00:00:00.000 Started by upstream project "autotest-per-patch" build number 132779 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.032 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:03.342 The recommended git tool is: git 00:00:03.343 using credential 00000000-0000-0000-0000-000000000002 00:00:03.346 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:03.358 Fetching changes from the remote Git repository 00:00:03.363 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:03.376 Using shallow fetch with depth 1 00:00:03.376 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:03.376 > git --version # timeout=10 00:00:03.387 > git --version # 'git version 2.39.2' 00:00:03.387 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:03.400 Setting http proxy: proxy-dmz.intel.com:911 00:00:03.400 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:10.035 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:10.044 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:10.055 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:10.055 > git config core.sparsecheckout # timeout=10 00:00:10.066 > git read-tree -mu HEAD # timeout=10 00:00:10.081 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:10.099 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:10.099 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:10.171 [Pipeline] Start of Pipeline 00:00:10.185 [Pipeline] library 00:00:10.186 Loading library shm_lib@master 00:00:10.187 Library shm_lib@master is cached. Copying from home. 00:00:10.203 [Pipeline] node 00:33:38.973 Resuming build at Mon Dec 09 09:44:46 UTC 2024 after Jenkins restart 00:33:38.976 Ready to run at Mon Dec 09 09:44:46 UTC 2024 00:33:46.968 Running on VM-host-WFP7 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:33:46.974 [Pipeline] { 00:33:47.000 [Pipeline] catchError 00:33:47.003 [Pipeline] { 00:33:47.062 [Pipeline] wrap 00:33:47.082 [Pipeline] { 00:33:47.114 [Pipeline] stage 00:33:47.119 [Pipeline] { (Prologue) 00:33:47.191 [Pipeline] echo 00:33:47.195 Node: VM-host-WFP7 00:33:47.224 [Pipeline] cleanWs 00:33:47.313 [WS-CLEANUP] Deleting project workspace... 00:33:47.313 [WS-CLEANUP] Deferred wipeout is used... 
00:33:47.338 [WS-CLEANUP] done 00:33:48.252 [Pipeline] setCustomBuildProperty 00:33:48.453 [Pipeline] httpRequest 00:33:52.617 [Pipeline] echo 00:33:52.620 Sorcerer 10.211.164.101 is dead 00:33:52.632 [Pipeline] httpRequest 00:33:52.807 [Pipeline] echo 00:33:52.809 Sorcerer 10.211.164.96 is dead 00:33:52.814 [Pipeline] httpRequest 00:33:55.838 [Pipeline] echo 00:33:55.839 Sorcerer 10.211.164.20 is dead 00:33:55.848 [Pipeline] httpRequest 00:33:58.871 [Pipeline] echo 00:33:58.872 Sorcerer 10.211.164.23 is dead 00:33:58.881 [Pipeline] httpRequest 00:34:01.907 [Pipeline] echo 00:34:01.909 Sorcerer 10.211.164.112 is dead 00:34:01.915 [Pipeline] retry 00:34:01.917 [Pipeline] { 00:34:01.939 [Pipeline] checkout 00:34:01.947 The recommended git tool is: git 00:34:03.577 using credential 00000000-0000-0000-0000-000000000002 00:34:03.588 Wiping out workspace first. 00:34:03.604 Cloning the remote Git repository 00:34:03.611 Honoring refspec on initial clone 00:34:03.671 Cloning repository https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:34:03.712 > git init /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp # timeout=10 00:34:03.763 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:34:03.763 > git --version # timeout=10 00:34:03.774 > git --version # 'git version 2.25.1' 00:34:03.776 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:34:03.794 Setting http proxy: proxy-dmz.intel.com:911 00:34:03.794 > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=10 00:35:07.511 Avoid second fetch 00:35:07.527 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:35:07.621 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:35:07.624 [Pipeline] } 00:35:07.642 [Pipeline] // retry 00:35:07.658 [Pipeline] httpRequest 00:35:08.049 [Pipeline] echo 00:35:08.051 Sorcerer 10.211.164.112 is alive 00:35:08.062 [Pipeline] retry 00:35:08.064 [Pipeline] { 00:35:08.080 [Pipeline] httpRequest 00:35:08.085 HttpMethod: GET 00:35:08.086 URL: http://10.211.164.112/packages/spdk_b71c8b8dd8fbedef346b576febb9d3eedde82b3c.tar.gz 00:35:08.086 Sending request to url: http://10.211.164.112/packages/spdk_b71c8b8dd8fbedef346b576febb9d3eedde82b3c.tar.gz 00:35:08.090 Response Code: HTTP/1.1 404 Not Found 00:35:08.090 Success: Status code 404 is in the accepted range: 200,404 00:35:08.091 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_b71c8b8dd8fbedef346b576febb9d3eedde82b3c.tar.gz 00:35:08.095 [Pipeline] } 00:35:08.114 [Pipeline] // retry 00:35:08.123 [Pipeline] sh 00:35:07.496 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:35:07.500 > git config --add remote.origin.fetch refs/heads/master # timeout=10 00:35:07.512 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:35:07.520 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:35:07.530 > git config core.sparsecheckout # timeout=10 00:35:07.534 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:35:08.412 + rm -f spdk_b71c8b8dd8fbedef346b576febb9d3eedde82b3c.tar.gz 00:35:08.425 [Pipeline] retry 00:35:08.427 [Pipeline] { 00:35:08.445 [Pipeline] checkout 00:35:08.452 The recommended git tool is: NONE 00:35:08.464 using credential 00000000-0000-0000-0000-000000000002 00:35:08.466 Wiping out workspace first. 
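[Editor's sketch] The httpRequest dance above is a package cache in front of git: each Sorcerer host is probed in turn until one answers, the pre-built spdk_<sha>.tar.gz is then requested with 404 listed in the accepted status codes, and a miss (as here: the 404 body is saved, removed with rm -f, and the job falls back to a fresh clone) ends with the repacked result uploaded back later in the log. A minimal sketch of the same probe-then-fall-back flow, assuming curl and the plain HTTP layout shown in the URLs; the aliveness probe against / is an assumption, since the log only shows the Jenkins httpRequest step:

#!/usr/bin/env bash
# Sketch of the Sorcerer cache-miss flow seen above (illustrative, not the CI code).
sha=b71c8b8dd8fbedef346b576febb9d3eedde82b3c
for host in 10.211.164.101 10.211.164.96 10.211.164.20 10.211.164.23 10.211.164.112; do
    # Assumed aliveness probe: any HTTP answer counts; dead hosts time out.
    if curl -s -m 5 -o /dev/null "http://$host/"; then alive=$host; break; fi
done
: "${alive:?no Sorcerer host answered}"
if curl -sf -o "spdk_${sha}.tar.gz" "http://$alive/packages/spdk_${sha}.tar.gz"; then
    echo "cache hit: unpack the tarball instead of cloning"
else
    rm -f "spdk_${sha}.tar.gz"   # drop the 404 body, as the job does above
    echo "cache miss: clone spdk, repack, tar, then PUT the result back"
fi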
00:35:08.475 Cloning the remote Git repository 00:35:08.478 Honoring refspec on initial clone 00:35:08.482 Cloning repository https://review.spdk.io/gerrit/a/spdk/spdk 00:35:08.482 > git init /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk # timeout=10 00:35:08.490 Using reference repository: /var/ci_repos/spdk_multi 00:35:08.490 Fetching upstream changes from https://review.spdk.io/gerrit/a/spdk/spdk 00:35:08.490 > git --version # timeout=10 00:35:08.495 > git --version # 'git version 2.25.1' 00:35:08.495 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:35:08.500 Setting http proxy: proxy-dmz.intel.com:911 00:35:08.500 > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/spdk/spdk refs/changes/95/25495/5 +refs/heads/master:refs/remotes/origin/master # timeout=10 00:35:54.420 Avoid second fetch 00:35:54.440 Checking out Revision b71c8b8dd8fbedef346b576febb9d3eedde82b3c (FETCH_HEAD) 00:35:54.660 Commit message: "env: explicitly set --legacy-mem flag in no hugepages mode" 00:35:54.668 First time build. Skipping changelog. 00:35:54.396 > git config remote.origin.url https://review.spdk.io/gerrit/a/spdk/spdk # timeout=10 00:35:54.402 > git config --add remote.origin.fetch refs/changes/95/25495/5 # timeout=10 00:35:54.407 > git config --add remote.origin.fetch +refs/heads/master:refs/remotes/origin/master # timeout=10 00:35:54.422 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:35:54.432 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:35:54.443 > git config core.sparsecheckout # timeout=10 00:35:54.447 > git checkout -f b71c8b8dd8fbedef346b576febb9d3eedde82b3c # timeout=10 00:35:54.662 > git rev-list --no-walk c4269c6e2cd0445b86aa16195993e54ed2cad2dd # timeout=10 00:35:54.672 > git remote # timeout=10 00:35:54.675 > git submodule init # timeout=10 00:35:54.712 > git submodule sync # timeout=10 00:35:54.770 > git config --get remote.origin.url # timeout=10 00:35:54.780 > git submodule init # timeout=10 00:35:54.833 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10 00:35:54.837 > git config --get submodule.dpdk.url # timeout=10 00:35:54.842 > git remote # timeout=10 00:35:54.846 > git config --get remote.origin.url # timeout=10 00:35:54.850 > git config -f .gitmodules --get submodule.dpdk.path # timeout=10 00:35:54.855 > git config --get submodule.intel-ipsec-mb.url # timeout=10 00:35:54.859 > git remote # timeout=10 00:35:54.862 > git config --get remote.origin.url # timeout=10 00:35:54.866 > git config -f .gitmodules --get submodule.intel-ipsec-mb.path # timeout=10 00:35:54.871 > git config --get submodule.isa-l.url # timeout=10 00:35:54.875 > git remote # timeout=10 00:35:54.880 > git config --get remote.origin.url # timeout=10 00:35:54.884 > git config -f .gitmodules --get submodule.isa-l.path # timeout=10 00:35:54.887 > git config --get submodule.ocf.url # timeout=10 00:35:54.889 > git remote # timeout=10 00:35:54.891 > git config --get remote.origin.url # timeout=10 00:35:54.894 > git config -f .gitmodules --get submodule.ocf.path # timeout=10 00:35:54.896 > git config --get submodule.libvfio-user.url # timeout=10 00:35:54.899 > git remote # timeout=10 00:35:54.902 > git config --get remote.origin.url # timeout=10 00:35:54.905 > git config -f .gitmodules --get submodule.libvfio-user.path # timeout=10 00:35:54.908 > git config --get submodule.xnvme.url # timeout=10 00:35:54.911 > git remote # timeout=10 00:35:54.914 > git config --get remote.origin.url # timeout=10 00:35:54.918 > git config -f .gitmodules --get 
submodule.xnvme.path # timeout=10 00:35:54.922 > git config --get submodule.isa-l-crypto.url # timeout=10 00:35:54.926 > git remote # timeout=10 00:35:54.930 > git config --get remote.origin.url # timeout=10 00:35:54.933 > git config -f .gitmodules --get submodule.isa-l-crypto.path # timeout=10 00:35:54.941 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:35:54.941 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:35:54.941 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:35:54.941 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:35:54.941 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:35:54.941 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:35:54.941 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:35:54.945 Setting http proxy: proxy-dmz.intel.com:911 00:35:54.945 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi intel-ipsec-mb # timeout=10 00:35:54.946 Setting http proxy: proxy-dmz.intel.com:911 00:35:54.946 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l # timeout=10 00:35:54.947 Setting http proxy: proxy-dmz.intel.com:911 00:35:54.947 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l-crypto # timeout=10 00:35:54.947 Setting http proxy: proxy-dmz.intel.com:911 00:35:54.947 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi ocf # timeout=10 00:35:54.947 Setting http proxy: proxy-dmz.intel.com:911 00:35:54.947 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi xnvme # timeout=10 00:35:54.947 Setting http proxy: proxy-dmz.intel.com:911 00:35:54.948 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi dpdk # timeout=10 00:35:54.948 Setting http proxy: proxy-dmz.intel.com:911 00:35:54.948 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi libvfio-user # timeout=10 00:36:40.364 [Pipeline] dir 00:36:40.365 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:36:40.367 [Pipeline] { 00:36:40.381 [Pipeline] sh 00:36:40.671 ++ nproc 00:36:40.671 + threads=80 00:36:40.671 + git repack -a -d --threads=80 00:36:47.279 + git submodule foreach git repack -a -d --threads=80 00:36:47.279 Entering 'dpdk' 00:36:50.585 Entering 'intel-ipsec-mb' 00:36:50.846 Entering 'isa-l' 00:36:50.846 Entering 'isa-l-crypto' 00:36:50.846 Entering 'libvfio-user' 00:36:50.846 Entering 'ocf' 00:36:51.105 Entering 'xnvme' 00:36:51.366 + find .git -type f -name alternates -print -delete 00:36:51.366 .git/modules/libvfio-user/objects/info/alternates 00:36:51.366 .git/modules/isa-l-crypto/objects/info/alternates 00:36:51.366 .git/modules/ocf/objects/info/alternates 00:36:51.366 .git/modules/intel-ipsec-mb/objects/info/alternates 00:36:51.366 .git/modules/isa-l/objects/info/alternates 00:36:51.366 .git/modules/xnvme/objects/info/alternates 00:36:51.366 .git/modules/dpdk/objects/info/alternates 00:36:51.366 .git/objects/info/alternates 00:36:51.376 [Pipeline] } 00:36:51.395 [Pipeline] // dir 00:36:51.402 [Pipeline] } 00:36:51.420 [Pipeline] // retry 00:36:51.428 [Pipeline] sh 00:36:51.717 + hash pigz 00:36:51.717 + tar -czf spdk_b71c8b8dd8fbedef346b576febb9d3eedde82b3c.tar.gz spdk 00:37:06.673 [Pipeline] retry 00:37:06.675 [Pipeline] { 00:37:06.689 [Pipeline] httpRequest 00:37:06.701 HttpMethod: PUT 00:37:06.702 URL: 
http://10.211.164.112/cgi-bin/sorcerer.py?group=packages&filename=spdk_b71c8b8dd8fbedef346b576febb9d3eedde82b3c.tar.gz 00:37:06.702 Sending request to url: http://10.211.164.112/cgi-bin/sorcerer.py?group=packages&filename=spdk_b71c8b8dd8fbedef346b576febb9d3eedde82b3c.tar.gz 00:37:16.843 Response Code: HTTP/1.1 200 OK 00:37:16.854 Success: Status code 200 is in the accepted range: 200 00:37:16.857 [Pipeline] } 00:37:16.875 [Pipeline] // retry 00:37:16.882 [Pipeline] echo 00:37:16.884 00:37:16.884 Locking 00:37:16.884 Waited 2s for lock 00:37:16.884 File already exists: /storage/packages/spdk_b71c8b8dd8fbedef346b576febb9d3eedde82b3c.tar.gz 00:37:16.884 00:37:16.888 [Pipeline] sh 00:37:17.185 + git -C spdk log --oneline -n5 00:37:17.185 b71c8b8dd env: explicitly set --legacy-mem flag in no hugepages mode 00:37:17.185 496bfd677 env: match legacy mem mode config with DPDK 00:37:17.185 a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails 00:37:17.185 0f59982b6 blob: don't use bs_load_ctx_fail in bs_write_used_* functions 00:37:17.185 0354bb8e8 nvme/rdma: Force qp disconnect on pg remove 00:37:17.211 [Pipeline] writeFile 00:37:17.231 [Pipeline] sh 00:37:17.531 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:37:17.549 [Pipeline] sh 00:37:17.848 + cat autorun-spdk.conf 00:37:17.849 SPDK_RUN_FUNCTIONAL_TEST=1 00:37:17.849 SPDK_TEST_NVMF=1 00:37:17.849 SPDK_TEST_NVMF_TRANSPORT=tcp 00:37:17.849 SPDK_TEST_USDT=1 00:37:17.849 SPDK_TEST_NVMF_MDNS=1 00:37:17.849 SPDK_RUN_UBSAN=1 00:37:17.849 NET_TYPE=virt 00:37:17.849 SPDK_JSONRPC_GO_CLIENT=1 00:37:17.849 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:37:17.861 RUN_NIGHTLY=0 00:37:17.864 [Pipeline] } 00:37:17.880 [Pipeline] // stage 00:37:17.902 [Pipeline] stage 00:37:17.905 [Pipeline] { (Run VM) 00:37:17.923 [Pipeline] sh 00:37:18.222 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:37:18.223 + echo 'Start stage prepare_nvme.sh' 00:37:18.223 Start stage prepare_nvme.sh 00:37:18.223 + [[ -n 5 ]] 00:37:18.223 + disk_prefix=ex5 00:37:18.223 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:37:18.223 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:37:18.223 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:37:18.223 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:37:18.223 ++ SPDK_TEST_NVMF=1 00:37:18.223 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:37:18.223 ++ SPDK_TEST_USDT=1 00:37:18.223 ++ SPDK_TEST_NVMF_MDNS=1 00:37:18.223 ++ SPDK_RUN_UBSAN=1 00:37:18.223 ++ NET_TYPE=virt 00:37:18.223 ++ SPDK_JSONRPC_GO_CLIENT=1 00:37:18.223 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:37:18.223 ++ RUN_NIGHTLY=0 00:37:18.223 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:37:18.223 + nvme_files=() 00:37:18.223 + declare -A nvme_files 00:37:18.223 + backend_dir=/var/lib/libvirt/images/backends 00:37:18.223 + nvme_files['nvme.img']=5G 00:37:18.223 + nvme_files['nvme-cmb.img']=5G 00:37:18.223 + nvme_files['nvme-multi0.img']=4G 00:37:18.223 + nvme_files['nvme-multi1.img']=4G 00:37:18.223 + nvme_files['nvme-multi2.img']=4G 00:37:18.223 + nvme_files['nvme-openstack.img']=8G 00:37:18.223 + nvme_files['nvme-zns.img']=5G 00:37:18.223 + (( SPDK_TEST_NVME_PMR == 1 )) 00:37:18.223 + (( SPDK_TEST_FTL == 1 )) 00:37:18.223 + (( SPDK_TEST_NVME_FDP == 1 )) 00:37:18.223 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:37:18.223 + for nvme in "${!nvme_files[@]}" 00:37:18.223 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:37:18.223 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:37:18.223 + for nvme in "${!nvme_files[@]}" 00:37:18.223 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:37:18.223 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:37:18.223 + for nvme in "${!nvme_files[@]}" 00:37:18.223 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:37:18.223 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:37:18.223 + for nvme in "${!nvme_files[@]}" 00:37:18.223 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:37:18.223 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:37:18.223 + for nvme in "${!nvme_files[@]}" 00:37:18.223 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:37:18.223 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:37:18.223 + for nvme in "${!nvme_files[@]}" 00:37:18.223 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:37:18.223 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:37:18.223 + for nvme in "${!nvme_files[@]}" 00:37:18.223 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:37:18.488 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:37:18.488 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:37:18.488 + echo 'End stage prepare_nvme.sh' 00:37:18.488 End stage prepare_nvme.sh 00:37:18.504 [Pipeline] sh 00:37:18.803 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:37:18.803 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39 00:37:18.803 00:37:18.803 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:37:18.803 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:37:18.803 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:37:18.803 HELP=0 00:37:18.803 DRY_RUN=0 00:37:18.803 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:37:18.803 NVME_DISKS_TYPE=nvme,nvme, 00:37:18.803 NVME_AUTO_CREATE=0 00:37:18.803 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:37:18.803 NVME_CMB=,, 00:37:18.803 NVME_PMR=,, 00:37:18.803 NVME_ZNS=,, 00:37:18.803 NVME_MS=,, 00:37:18.803 NVME_FDP=,, 00:37:18.803 SPDK_VAGRANT_DISTRO=fedora39 
00:37:18.803 SPDK_VAGRANT_VMCPU=10 00:37:18.803 SPDK_VAGRANT_VMRAM=12288 00:37:18.803 SPDK_VAGRANT_PROVIDER=libvirt 00:37:18.803 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:37:18.803 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:37:18.803 SPDK_OPENSTACK_NETWORK=0 00:37:18.803 VAGRANT_PACKAGE_BOX=0 00:37:18.803 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:37:18.803 FORCE_DISTRO=true 00:37:18.803 VAGRANT_BOX_VERSION= 00:37:18.803 EXTRA_VAGRANTFILES= 00:37:18.803 NIC_MODEL=virtio 00:37:18.803 00:37:18.803 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt' 00:37:18.803 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:37:21.347 Bringing machine 'default' up with 'libvirt' provider... 00:37:21.607 ==> default: Creating image (snapshot of base box volume). 00:37:21.868 ==> default: Creating domain with the following settings... 00:37:21.868 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733737708_712bef741996b84f67d8 00:37:21.868 ==> default: -- Domain type: kvm 00:37:21.868 ==> default: -- Cpus: 10 00:37:21.868 ==> default: -- Feature: acpi 00:37:21.868 ==> default: -- Feature: apic 00:37:21.868 ==> default: -- Feature: pae 00:37:21.868 ==> default: -- Memory: 12288M 00:37:21.868 ==> default: -- Memory Backing: hugepages: 00:37:21.868 ==> default: -- Management MAC: 00:37:21.868 ==> default: -- Loader: 00:37:21.868 ==> default: -- Nvram: 00:37:21.868 ==> default: -- Base box: spdk/fedora39 00:37:21.868 ==> default: -- Storage pool: default 00:37:21.868 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733737708_712bef741996b84f67d8.img (20G) 00:37:21.868 ==> default: -- Volume Cache: default 00:37:21.868 ==> default: -- Kernel: 00:37:21.868 ==> default: -- Initrd: 00:37:21.868 ==> default: -- Graphics Type: vnc 00:37:21.868 ==> default: -- Graphics Port: -1 00:37:21.868 ==> default: -- Graphics IP: 127.0.0.1 00:37:21.868 ==> default: -- Graphics Password: Not defined 00:37:21.868 ==> default: -- Video Type: cirrus 00:37:21.868 ==> default: -- Video VRAM: 9216 00:37:21.868 ==> default: -- Sound Type: 00:37:21.868 ==> default: -- Keymap: en-us 00:37:21.868 ==> default: -- TPM Path: 00:37:21.868 ==> default: -- INPUT: type=mouse, bus=ps2 00:37:21.868 ==> default: -- Command line args: 00:37:21.868 ==> default: -> value=-device, 00:37:21.868 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:37:21.868 ==> default: -> value=-drive, 00:37:21.868 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:37:21.868 ==> default: -> value=-device, 00:37:21.868 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:37:21.868 ==> default: -> value=-device, 00:37:21.868 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:37:21.868 ==> default: -> value=-drive, 00:37:21.868 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:37:21.868 ==> default: -> value=-device, 00:37:21.868 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:37:21.868 ==> default: -> value=-drive, 00:37:21.868 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:37:21.868 ==> default: -> value=-device, 00:37:21.868 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:37:21.868 ==> default: -> value=-drive, 00:37:21.868 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:37:21.868 ==> default: -> value=-device, 00:37:21.868 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:37:22.127 ==> default: Creating shared folders metadata... 00:37:22.127 ==> default: Starting domain. 00:37:23.506 ==> default: Waiting for domain to get an IP address... 00:37:41.606 ==> default: Waiting for SSH to become available... 00:37:41.606 ==> default: Configuring and enabling network interfaces... 00:37:48.193 default: SSH address: 192.168.121.93:22 00:37:48.193 default: SSH username: vagrant 00:37:48.193 default: SSH auth method: private key 00:37:50.101 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:38:00.108 ==> default: Mounting SSHFS shared folder... 00:38:01.491 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:38:01.491 ==> default: Checking Mount.. 00:38:02.873 ==> default: Folder Successfully Mounted! 00:38:02.873 ==> default: Running provisioner: file... 00:38:04.256 default: ~/.gitconfig => .gitconfig 00:38:04.827 00:38:04.827 SUCCESS! 00:38:04.827 00:38:04.827 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:38:04.827 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:38:04.827 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:38:04.828 00:38:04.838 [Pipeline] } 00:38:04.853 [Pipeline] // stage 00:38:04.863 [Pipeline] dir 00:38:04.864 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt 00:38:04.865 [Pipeline] { 00:38:04.878 [Pipeline] catchError 00:38:04.880 [Pipeline] { 00:38:04.893 [Pipeline] sh 00:38:05.178 + vagrant ssh-config --host vagrant 00:38:05.178 + sed -ne /^Host/,$p 00:38:05.178 + tee ssh_conf 00:38:08.476 Host vagrant 00:38:08.476 HostName 192.168.121.93 00:38:08.476 User vagrant 00:38:08.476 Port 22 00:38:08.476 UserKnownHostsFile /dev/null 00:38:08.476 StrictHostKeyChecking no 00:38:08.476 PasswordAuthentication no 00:38:08.476 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:38:08.476 IdentitiesOnly yes 00:38:08.476 LogLevel FATAL 00:38:08.476 ForwardAgent yes 00:38:08.476 ForwardX11 yes 00:38:08.490 [Pipeline] withEnv 00:38:08.492 [Pipeline] { 00:38:08.509 [Pipeline] sh 00:38:08.795 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:38:08.795 source /etc/os-release 00:38:08.795 [[ -e /image.version ]] && img=$(< /image.version) 00:38:08.795 # Minimal, systemd-like check.
00:38:08.795 if [[ -e /.dockerenv ]]; then 00:38:08.795 # Clear garbage from the node's name: 00:38:08.795 # agt-er_autotest_547-896 -> autotest_547-896 00:38:08.795 # $HOSTNAME is the actual container id 00:38:08.795 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:38:08.795 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:38:08.795 # We can assume this is a mount from a host where container is running, 00:38:08.795 # so fetch its hostname to easily identify the target swarm worker. 00:38:08.795 container="$(< /etc/hostname) ($agent)" 00:38:08.795 else 00:38:08.795 # Fallback 00:38:08.795 container=$agent 00:38:08.795 fi 00:38:08.795 fi 00:38:08.795 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:38:08.795 00:38:09.070 [Pipeline] } 00:38:09.089 [Pipeline] // withEnv 00:38:09.100 [Pipeline] setCustomBuildProperty 00:38:09.117 [Pipeline] stage 00:38:09.120 [Pipeline] { (Tests) 00:38:09.139 [Pipeline] sh 00:38:09.429 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:38:09.707 [Pipeline] sh 00:38:09.993 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:38:10.272 [Pipeline] timeout 00:38:10.273 Timeout set to expire in 1 hr 0 min 00:38:10.275 [Pipeline] { 00:38:10.291 [Pipeline] sh 00:38:10.576 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:38:11.149 HEAD is now at b71c8b8dd env: explicitly set --legacy-mem flag in no hugepages mode 00:38:11.163 [Pipeline] sh 00:38:11.450 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:38:11.727 [Pipeline] sh 00:38:12.015 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:38:12.293 [Pipeline] sh 00:38:12.582 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:38:12.929 ++ readlink -f spdk_repo 00:38:12.929 + DIR_ROOT=/home/vagrant/spdk_repo 00:38:12.929 + [[ -n /home/vagrant/spdk_repo ]] 00:38:12.929 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:38:12.929 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:38:12.929 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:38:12.929 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:38:12.929 + [[ -d /home/vagrant/spdk_repo/output ]] 00:38:12.929 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:38:12.929 + cd /home/vagrant/spdk_repo 00:38:12.929 + source /etc/os-release 00:38:12.929 ++ NAME='Fedora Linux' 00:38:12.929 ++ VERSION='39 (Cloud Edition)' 00:38:12.929 ++ ID=fedora 00:38:12.929 ++ VERSION_ID=39 00:38:12.929 ++ VERSION_CODENAME= 00:38:12.929 ++ PLATFORM_ID=platform:f39 00:38:12.929 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:38:12.929 ++ ANSI_COLOR='0;38;2;60;110;180' 00:38:12.929 ++ LOGO=fedora-logo-icon 00:38:12.929 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:38:12.929 ++ HOME_URL=https://fedoraproject.org/ 00:38:12.929 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:38:12.929 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:38:12.929 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:38:12.929 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:38:12.929 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:38:12.929 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:38:12.929 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:38:12.929 ++ SUPPORT_END=2024-11-12 00:38:12.929 ++ VARIANT='Cloud Edition' 00:38:12.929 ++ VARIANT_ID=cloud 00:38:12.929 + uname -a 00:38:12.930 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:38:12.930 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:38:13.501 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:13.501 Hugepages 00:38:13.501 node hugesize free / total 00:38:13.501 node0 1048576kB 0 / 0 00:38:13.501 node0 2048kB 0 / 0 00:38:13.501 00:38:13.501 Type BDF Vendor Device NUMA Driver Device Block devices 00:38:13.501 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:38:13.501 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:38:13.501 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:38:13.501 + rm -f /tmp/spdk-ld-path 00:38:13.501 + source autorun-spdk.conf 00:38:13.501 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:38:13.501 ++ SPDK_TEST_NVMF=1 00:38:13.501 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:38:13.501 ++ SPDK_TEST_USDT=1 00:38:13.501 ++ SPDK_TEST_NVMF_MDNS=1 00:38:13.501 ++ SPDK_RUN_UBSAN=1 00:38:13.501 ++ NET_TYPE=virt 00:38:13.501 ++ SPDK_JSONRPC_GO_CLIENT=1 00:38:13.501 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:38:13.501 ++ RUN_NIGHTLY=0 00:38:13.501 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:38:13.501 + [[ -n '' ]] 00:38:13.501 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:38:13.501 + for M in /var/spdk/build-*-manifest.txt 00:38:13.501 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:38:13.501 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:38:13.501 + for M in /var/spdk/build-*-manifest.txt 00:38:13.501 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:38:13.501 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:38:13.501 + for M in /var/spdk/build-*-manifest.txt 00:38:13.501 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:38:13.501 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:38:13.501 ++ uname 00:38:13.501 + [[ Linux == \L\i\n\u\x ]] 00:38:13.501 + sudo dmesg -T 00:38:13.501 + sudo dmesg --clear 00:38:13.761 + dmesg_pid=5423 00:38:13.761 + sudo dmesg -Tw 00:38:13.761 + [[ Fedora Linux == FreeBSD ]] 00:38:13.761 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 
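[Editor's sketch] The `setup.sh status` output above (per-NUMA-node hugepage pools plus the PCI device table) is assembled from sysfs; the hugepage half can be reproduced with a few lines of shell. A small sketch against the standard kernel sysfs layout (kernel-standard paths, nothing SPDK-specific):

#!/usr/bin/env bash
# Print per-node hugepage pools in the same "node hugesize free / total" shape.
for node in /sys/devices/system/node/node*; do
    for pool in "$node"/hugepages/hugepages-*; do
        size=${pool##*hugepages-}   # e.g. 2048kB or 1048576kB
        printf '%s %s %s / %s\n' "${node##*/}" "$size" \
            "$(< "$pool/free_hugepages")" "$(< "$pool/nr_hugepages")"
    done
done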
00:38:13.761 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:38:13.761 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:38:13.761 + [[ -x /usr/src/fio-static/fio ]] 00:38:13.761 + export FIO_BIN=/usr/src/fio-static/fio 00:38:13.761 + FIO_BIN=/usr/src/fio-static/fio 00:38:13.761 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:38:13.761 + [[ ! -v VFIO_QEMU_BIN ]] 00:38:13.761 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:38:13.761 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:38:13.761 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:38:13.761 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:38:13.761 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:38:13.761 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:38:13.761 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:38:13.761 09:49:20 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:38:13.761 09:49:20 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:38:13.761 09:49:20 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:38:13.761 09:49:20 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:38:13.761 09:49:20 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:38:13.761 09:49:20 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_USDT=1 00:38:13.761 09:49:20 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_MDNS=1 00:38:13.761 09:49:20 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:38:13.761 09:49:20 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:38:13.761 09:49:20 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_JSONRPC_GO_CLIENT=1 00:38:13.761 09:49:20 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:38:13.761 09:49:20 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:38:13.761 09:49:20 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:38:13.761 09:49:20 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:38:13.761 09:49:20 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:38:13.761 09:49:20 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:13.761 09:49:20 -- scripts/common.sh@15 -- $ shopt -s extglob 00:38:13.761 09:49:20 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:38:13.761 09:49:20 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:13.761 09:49:20 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:13.761 09:49:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:13.761 09:49:20 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:13.761 09:49:20 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:13.761 09:49:20 -- paths/export.sh@5 -- $ export PATH 00:38:13.761 09:49:20 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:13.761 09:49:20 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:38:13.761 09:49:20 -- common/autobuild_common.sh@493 -- $ date +%s 00:38:13.761 09:49:20 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733737760.XXXXXX 00:38:13.761 09:49:20 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733737760.E22O8N 00:38:13.761 09:49:20 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:38:13.761 09:49:20 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:38:13.761 09:49:20 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:38:13.761 09:49:20 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:38:13.761 09:49:20 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:38:13.761 09:49:20 -- common/autobuild_common.sh@509 -- $ get_config_params 00:38:13.761 09:49:20 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:38:13.761 09:49:20 -- common/autotest_common.sh@10 -- $ set +x 00:38:14.022 09:49:20 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:38:14.022 09:49:20 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:38:14.022 09:49:20 -- pm/common@17 -- $ local monitor 00:38:14.022 09:49:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:14.022 09:49:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:14.022 09:49:20 -- pm/common@25 -- $ sleep 1 00:38:14.022 09:49:20 -- pm/common@21 -- $ date +%s 00:38:14.022 09:49:20 -- pm/common@21 -- $ date +%s 00:38:14.022 09:49:20 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733737760 00:38:14.022 09:49:20 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733737760 00:38:14.022 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733737760_collect-vmstat.pm.log 00:38:14.022 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733737760_collect-cpu-load.pm.log 00:38:14.963 09:49:21 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:38:14.963 09:49:21 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:38:14.963 09:49:21 -- spdk/autobuild.sh@12 -- $ umask 022 00:38:14.963 09:49:21 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:38:14.963 09:49:21 -- spdk/autobuild.sh@16 -- $ date -u 00:38:14.963 Mon Dec 9 09:49:21 AM UTC 2024 00:38:14.963 09:49:21 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:38:14.963 v25.01-pre-313-gb71c8b8dd 00:38:14.963 09:49:21 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:38:14.963 09:49:21 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:38:14.963 09:49:21 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:38:14.963 09:49:21 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:38:14.963 09:49:21 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:38:14.963 09:49:21 -- common/autotest_common.sh@10 -- $ set +x 00:38:14.963 ************************************ 00:38:14.963 START TEST ubsan 00:38:14.963 ************************************ 00:38:14.963 using ubsan 00:38:14.963 09:49:21 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:38:14.963 00:38:14.963 real 0m0.000s 00:38:14.963 user 0m0.000s 00:38:14.963 sys 0m0.000s 00:38:14.963 09:49:21 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:38:14.963 09:49:21 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:38:14.963 ************************************ 00:38:14.963 END TEST ubsan 00:38:14.963 ************************************ 00:38:14.963 09:49:21 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:38:14.963 09:49:21 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:38:14.964 09:49:21 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:38:14.964 09:49:21 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:38:14.964 09:49:21 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:38:14.964 09:49:21 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:38:14.964 09:49:21 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:38:14.964 09:49:21 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:38:14.964 09:49:21 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:38:15.224 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:38:15.224 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:38:15.794 Using 'verbs' RDMA provider 00:38:31.752 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:38:43.958 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:38:44.217 go version go1.21.1 linux/amd64 00:38:44.785 Creating mk/config.mk...done. 00:38:44.785 Creating mk/cc.flags.mk...done. 
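[Editor's sketch] Worth noting how the configure line above relates to autorun-spdk.conf: get_config_params turns the conf's flags into options, so SPDK_RUN_UBSAN=1 shows up as --enable-ubsan, SPDK_TEST_USDT=1 as --with-usdt, SPDK_TEST_NVMF_MDNS=1 as --with-avahi, and SPDK_JSONRPC_GO_CLIENT=1 as --with-golang. A reduced, hedged reconstruction of that mapping covering only the flags visible in this log (the real get_config_params handles many more):

#!/usr/bin/env bash
# Hedged reconstruction: derive part of the configure line from the conf flags.
source /home/vagrant/spdk_repo/autorun-spdk.conf
params='--enable-debug --enable-werror'
[[ ${SPDK_RUN_UBSAN:-0}         -eq 1 ]] && params+=' --enable-ubsan'
[[ ${SPDK_TEST_USDT:-0}         -eq 1 ]] && params+=' --with-usdt'
[[ ${SPDK_TEST_NVMF_MDNS:-0}    -eq 1 ]] && params+=' --with-avahi'
[[ ${SPDK_JSONRPC_GO_CLIENT:-0} -eq 1 ]] && params+=' --with-golang'
echo "./configure $params"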
00:38:44.785 Type 'make' to build. 00:38:44.785 09:49:51 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:38:44.785 09:49:51 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:38:44.785 09:49:51 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:38:44.785 09:49:51 -- common/autotest_common.sh@10 -- $ set +x 00:38:44.785 ************************************ 00:38:44.785 START TEST make 00:38:44.785 ************************************ 00:38:44.785 09:49:51 make -- common/autotest_common.sh@1129 -- $ make -j10 00:38:45.043 make[1]: Nothing to be done for 'all'. 00:39:06.973 The Meson build system 00:39:06.973 Version: 1.5.0 00:39:06.973 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:39:06.973 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:39:06.973 Build type: native build 00:39:06.973 Program cat found: YES (/usr/bin/cat) 00:39:06.973 Project name: DPDK 00:39:06.973 Project version: 24.03.0 00:39:06.973 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:39:06.973 C linker for the host machine: cc ld.bfd 2.40-14 00:39:06.973 Host machine cpu family: x86_64 00:39:06.973 Host machine cpu: x86_64 00:39:06.973 Message: ## Building in Developer Mode ## 00:39:06.973 Program pkg-config found: YES (/usr/bin/pkg-config) 00:39:06.973 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:39:06.973 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:39:06.973 Program python3 found: YES (/usr/bin/python3) 00:39:06.973 Program cat found: YES (/usr/bin/cat) 00:39:06.973 Compiler for C supports arguments -march=native: YES 00:39:06.973 Checking for size of "void *" : 8 00:39:06.973 Checking for size of "void *" : 8 (cached) 00:39:06.973 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:39:06.973 Library m found: YES 00:39:06.973 Library numa found: YES 00:39:06.973 Has header "numaif.h" : YES 00:39:06.973 Library fdt found: NO 00:39:06.973 Library execinfo found: NO 00:39:06.973 Has header "execinfo.h" : YES 00:39:06.973 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:39:06.973 Run-time dependency libarchive found: NO (tried pkgconfig) 00:39:06.973 Run-time dependency libbsd found: NO (tried pkgconfig) 00:39:06.973 Run-time dependency jansson found: NO (tried pkgconfig) 00:39:06.973 Run-time dependency openssl found: YES 3.1.1 00:39:06.973 Run-time dependency libpcap found: YES 1.10.4 00:39:06.973 Has header "pcap.h" with dependency libpcap: YES 00:39:06.973 Compiler for C supports arguments -Wcast-qual: YES 00:39:06.973 Compiler for C supports arguments -Wdeprecated: YES 00:39:06.973 Compiler for C supports arguments -Wformat: YES 00:39:06.973 Compiler for C supports arguments -Wformat-nonliteral: NO 00:39:06.973 Compiler for C supports arguments -Wformat-security: NO 00:39:06.973 Compiler for C supports arguments -Wmissing-declarations: YES 00:39:06.973 Compiler for C supports arguments -Wmissing-prototypes: YES 00:39:06.973 Compiler for C supports arguments -Wnested-externs: YES 00:39:06.973 Compiler for C supports arguments -Wold-style-definition: YES 00:39:06.973 Compiler for C supports arguments -Wpointer-arith: YES 00:39:06.973 Compiler for C supports arguments -Wsign-compare: YES 00:39:06.973 Compiler for C supports arguments -Wstrict-prototypes: YES 00:39:06.973 Compiler for C supports arguments -Wundef: YES 00:39:06.973 Compiler for C supports arguments -Wwrite-strings: YES 
00:39:06.973 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:39:06.973 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:39:06.973 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:39:06.973 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:39:06.973 Program objdump found: YES (/usr/bin/objdump) 00:39:06.973 Compiler for C supports arguments -mavx512f: YES 00:39:06.973 Checking if "AVX512 checking" compiles: YES 00:39:06.973 Fetching value of define "__SSE4_2__" : 1 00:39:06.973 Fetching value of define "__AES__" : 1 00:39:06.973 Fetching value of define "__AVX__" : 1 00:39:06.973 Fetching value of define "__AVX2__" : 1 00:39:06.973 Fetching value of define "__AVX512BW__" : 1 00:39:06.973 Fetching value of define "__AVX512CD__" : 1 00:39:06.973 Fetching value of define "__AVX512DQ__" : 1 00:39:06.973 Fetching value of define "__AVX512F__" : 1 00:39:06.973 Fetching value of define "__AVX512VL__" : 1 00:39:06.973 Fetching value of define "__PCLMUL__" : 1 00:39:06.973 Fetching value of define "__RDRND__" : 1 00:39:06.973 Fetching value of define "__RDSEED__" : 1 00:39:06.973 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:39:06.973 Fetching value of define "__znver1__" : (undefined) 00:39:06.973 Fetching value of define "__znver2__" : (undefined) 00:39:06.974 Fetching value of define "__znver3__" : (undefined) 00:39:06.974 Fetching value of define "__znver4__" : (undefined) 00:39:06.974 Compiler for C supports arguments -Wno-format-truncation: YES 00:39:06.974 Message: lib/log: Defining dependency "log" 00:39:06.974 Message: lib/kvargs: Defining dependency "kvargs" 00:39:06.974 Message: lib/telemetry: Defining dependency "telemetry" 00:39:06.974 Checking for function "getentropy" : NO 00:39:06.974 Message: lib/eal: Defining dependency "eal" 00:39:06.974 Message: lib/ring: Defining dependency "ring" 00:39:06.974 Message: lib/rcu: Defining dependency "rcu" 00:39:06.974 Message: lib/mempool: Defining dependency "mempool" 00:39:06.974 Message: lib/mbuf: Defining dependency "mbuf" 00:39:06.974 Fetching value of define "__PCLMUL__" : 1 (cached) 00:39:06.974 Fetching value of define "__AVX512F__" : 1 (cached) 00:39:06.974 Fetching value of define "__AVX512BW__" : 1 (cached) 00:39:06.974 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:39:06.974 Fetching value of define "__AVX512VL__" : 1 (cached) 00:39:06.974 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:39:06.974 Compiler for C supports arguments -mpclmul: YES 00:39:06.974 Compiler for C supports arguments -maes: YES 00:39:06.974 Compiler for C supports arguments -mavx512f: YES (cached) 00:39:06.974 Compiler for C supports arguments -mavx512bw: YES 00:39:06.974 Compiler for C supports arguments -mavx512dq: YES 00:39:06.974 Compiler for C supports arguments -mavx512vl: YES 00:39:06.974 Compiler for C supports arguments -mvpclmulqdq: YES 00:39:06.974 Compiler for C supports arguments -mavx2: YES 00:39:06.974 Compiler for C supports arguments -mavx: YES 00:39:06.974 Message: lib/net: Defining dependency "net" 00:39:06.974 Message: lib/meter: Defining dependency "meter" 00:39:06.974 Message: lib/ethdev: Defining dependency "ethdev" 00:39:06.974 Message: lib/pci: Defining dependency "pci" 00:39:06.974 Message: lib/cmdline: Defining dependency "cmdline" 00:39:06.974 Message: lib/hash: Defining dependency "hash" 00:39:06.974 Message: lib/timer: Defining dependency "timer" 00:39:06.974 Message: lib/compressdev: Defining dependency 
"compressdev" 00:39:06.974 Message: lib/cryptodev: Defining dependency "cryptodev" 00:39:06.974 Message: lib/dmadev: Defining dependency "dmadev" 00:39:06.974 Compiler for C supports arguments -Wno-cast-qual: YES 00:39:06.974 Message: lib/power: Defining dependency "power" 00:39:06.974 Message: lib/reorder: Defining dependency "reorder" 00:39:06.974 Message: lib/security: Defining dependency "security" 00:39:06.974 Has header "linux/userfaultfd.h" : YES 00:39:06.974 Has header "linux/vduse.h" : YES 00:39:06.974 Message: lib/vhost: Defining dependency "vhost" 00:39:06.974 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:39:06.974 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:39:06.974 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:39:06.974 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:39:06.974 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:39:06.974 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:39:06.974 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:39:06.974 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:39:06.974 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:39:06.974 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:39:06.974 Program doxygen found: YES (/usr/local/bin/doxygen) 00:39:06.974 Configuring doxy-api-html.conf using configuration 00:39:06.974 Configuring doxy-api-man.conf using configuration 00:39:06.974 Program mandb found: YES (/usr/bin/mandb) 00:39:06.974 Program sphinx-build found: NO 00:39:06.974 Configuring rte_build_config.h using configuration 00:39:06.974 Message: 00:39:06.974 ================= 00:39:06.974 Applications Enabled 00:39:06.974 ================= 00:39:06.974 00:39:06.974 apps: 00:39:06.974 00:39:06.974 00:39:06.974 Message: 00:39:06.974 ================= 00:39:06.974 Libraries Enabled 00:39:06.974 ================= 00:39:06.974 00:39:06.974 libs: 00:39:06.974 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:39:06.974 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:39:06.974 cryptodev, dmadev, power, reorder, security, vhost, 00:39:06.974 00:39:06.974 Message: 00:39:06.974 =============== 00:39:06.974 Drivers Enabled 00:39:06.974 =============== 00:39:06.974 00:39:06.974 common: 00:39:06.974 00:39:06.974 bus: 00:39:06.974 pci, vdev, 00:39:06.974 mempool: 00:39:06.974 ring, 00:39:06.974 dma: 00:39:06.974 00:39:06.974 net: 00:39:06.974 00:39:06.974 crypto: 00:39:06.974 00:39:06.974 compress: 00:39:06.974 00:39:06.974 vdpa: 00:39:06.974 00:39:06.974 00:39:06.974 Message: 00:39:06.974 ================= 00:39:06.974 Content Skipped 00:39:06.974 ================= 00:39:06.974 00:39:06.974 apps: 00:39:06.974 dumpcap: explicitly disabled via build config 00:39:06.974 graph: explicitly disabled via build config 00:39:06.974 pdump: explicitly disabled via build config 00:39:06.974 proc-info: explicitly disabled via build config 00:39:06.974 test-acl: explicitly disabled via build config 00:39:06.974 test-bbdev: explicitly disabled via build config 00:39:06.974 test-cmdline: explicitly disabled via build config 00:39:06.974 test-compress-perf: explicitly disabled via build config 00:39:06.974 test-crypto-perf: explicitly disabled via build config 00:39:06.974 test-dma-perf: explicitly disabled via build config 00:39:06.974 test-eventdev: explicitly disabled via build config 00:39:06.975 test-fib: 
explicitly disabled via build config
00:39:06.975 test-flow-perf: explicitly disabled via build config
00:39:06.975 test-gpudev: explicitly disabled via build config
00:39:06.975 test-mldev: explicitly disabled via build config
00:39:06.975 test-pipeline: explicitly disabled via build config
00:39:06.975 test-pmd: explicitly disabled via build config
00:39:06.975 test-regex: explicitly disabled via build config
00:39:06.975 test-sad: explicitly disabled via build config
00:39:06.975 test-security-perf: explicitly disabled via build config
00:39:06.975 
00:39:06.975 libs: 
00:39:06.975 argparse: explicitly disabled via build config
00:39:06.975 metrics: explicitly disabled via build config
00:39:06.975 acl: explicitly disabled via build config
00:39:06.975 bbdev: explicitly disabled via build config
00:39:06.975 bitratestats: explicitly disabled via build config
00:39:06.975 bpf: explicitly disabled via build config
00:39:06.975 cfgfile: explicitly disabled via build config
00:39:06.975 distributor: explicitly disabled via build config
00:39:06.975 efd: explicitly disabled via build config
00:39:06.975 eventdev: explicitly disabled via build config
00:39:06.975 dispatcher: explicitly disabled via build config
00:39:06.975 gpudev: explicitly disabled via build config
00:39:06.975 gro: explicitly disabled via build config
00:39:06.975 gso: explicitly disabled via build config
00:39:06.975 ip_frag: explicitly disabled via build config
00:39:06.975 jobstats: explicitly disabled via build config
00:39:06.975 latencystats: explicitly disabled via build config
00:39:06.975 lpm: explicitly disabled via build config
00:39:06.975 member: explicitly disabled via build config
00:39:06.975 pcapng: explicitly disabled via build config
00:39:06.975 rawdev: explicitly disabled via build config
00:39:06.975 regexdev: explicitly disabled via build config
00:39:06.975 mldev: explicitly disabled via build config
00:39:06.975 rib: explicitly disabled via build config
00:39:06.975 sched: explicitly disabled via build config
00:39:06.975 stack: explicitly disabled via build config
00:39:06.975 ipsec: explicitly disabled via build config
00:39:06.975 pdcp: explicitly disabled via build config
00:39:06.975 fib: explicitly disabled via build config
00:39:06.975 port: explicitly disabled via build config
00:39:06.975 pdump: explicitly disabled via build config
00:39:06.975 table: explicitly disabled via build config
00:39:06.975 pipeline: explicitly disabled via build config
00:39:06.975 graph: explicitly disabled via build config
00:39:06.975 node: explicitly disabled via build config
00:39:06.975 
00:39:06.975 drivers: 
00:39:06.975 common/cpt: not in enabled drivers build config
00:39:06.975 common/dpaax: not in enabled drivers build config
00:39:06.975 common/iavf: not in enabled drivers build config
00:39:06.975 common/idpf: not in enabled drivers build config
00:39:06.975 common/ionic: not in enabled drivers build config
00:39:06.975 common/mvep: not in enabled drivers build config
00:39:06.975 common/octeontx: not in enabled drivers build config
00:39:06.975 bus/auxiliary: not in enabled drivers build config
00:39:06.975 bus/cdx: not in enabled drivers build config
00:39:06.975 bus/dpaa: not in enabled drivers build config
00:39:06.975 bus/fslmc: not in enabled drivers build config
00:39:06.975 bus/ifpga: not in enabled drivers build config
00:39:06.975 bus/platform: not in enabled drivers build config
00:39:06.975 bus/uacce: not in enabled drivers build config
00:39:06.975 bus/vmbus: not in enabled drivers build config
00:39:06.975 common/cnxk: not in enabled drivers build config
00:39:06.975 common/mlx5: not in enabled drivers build config
00:39:06.975 common/nfp: not in enabled drivers build config
00:39:06.975 common/nitrox: not in enabled drivers build config
00:39:06.975 common/qat: not in enabled drivers build config
00:39:06.975 common/sfc_efx: not in enabled drivers build config
00:39:06.975 mempool/bucket: not in enabled drivers build config
00:39:06.975 mempool/cnxk: not in enabled drivers build config
00:39:06.975 mempool/dpaa: not in enabled drivers build config
00:39:06.975 mempool/dpaa2: not in enabled drivers build config
00:39:06.975 mempool/octeontx: not in enabled drivers build config
00:39:06.975 mempool/stack: not in enabled drivers build config
00:39:06.975 dma/cnxk: not in enabled drivers build config
00:39:06.975 dma/dpaa: not in enabled drivers build config
00:39:06.975 dma/dpaa2: not in enabled drivers build config
00:39:06.975 dma/hisilicon: not in enabled drivers build config
00:39:06.975 dma/idxd: not in enabled drivers build config
00:39:06.975 dma/ioat: not in enabled drivers build config
00:39:06.975 dma/skeleton: not in enabled drivers build config
00:39:06.975 net/af_packet: not in enabled drivers build config
00:39:06.975 net/af_xdp: not in enabled drivers build config
00:39:06.975 net/ark: not in enabled drivers build config
00:39:06.975 net/atlantic: not in enabled drivers build config
00:39:06.975 net/avp: not in enabled drivers build config
00:39:06.975 net/axgbe: not in enabled drivers build config
00:39:06.975 net/bnx2x: not in enabled drivers build config
00:39:06.975 net/bnxt: not in enabled drivers build config
00:39:06.975 net/bonding: not in enabled drivers build config
00:39:06.975 net/cnxk: not in enabled drivers build config
00:39:06.975 net/cpfl: not in enabled drivers build config
00:39:06.975 net/cxgbe: not in enabled drivers build config
00:39:06.975 net/dpaa: not in enabled drivers build config
00:39:06.975 net/dpaa2: not in enabled drivers build config
00:39:06.975 net/e1000: not in enabled drivers build config
00:39:06.975 net/ena: not in enabled drivers build config
00:39:06.975 net/enetc: not in enabled drivers build config
00:39:06.975 net/enetfec: not in enabled drivers build config
00:39:06.975 net/enic: not in enabled drivers build config
00:39:06.975 net/failsafe: not in enabled drivers build config
00:39:06.975 net/fm10k: not in enabled drivers build config
00:39:06.975 net/gve: not in enabled drivers build config
00:39:06.975 net/hinic: not in enabled drivers build config
00:39:06.975 net/hns3: not in enabled drivers build config
00:39:06.975 net/i40e: not in enabled drivers build config
00:39:06.975 net/iavf: not in enabled drivers build config
00:39:06.975 net/ice: not in enabled drivers build config
00:39:06.976 net/idpf: not in enabled drivers build config
00:39:06.976 net/igc: not in enabled drivers build config
00:39:06.976 net/ionic: not in enabled drivers build config
00:39:06.976 net/ipn3ke: not in enabled drivers build config
00:39:06.976 net/ixgbe: not in enabled drivers build config
00:39:06.976 net/mana: not in enabled drivers build config
00:39:06.976 net/memif: not in enabled drivers build config
00:39:06.976 net/mlx4: not in enabled drivers build config
00:39:06.976 net/mlx5: not in enabled drivers build config
00:39:06.976 net/mvneta: not in enabled drivers build config
00:39:06.976 net/mvpp2: not in enabled drivers build config
00:39:06.976 net/netvsc: not in enabled drivers build config
00:39:06.976 net/nfb: not in enabled drivers build config
00:39:06.976 net/nfp: not in enabled drivers build config
00:39:06.976 net/ngbe: not in enabled drivers build config
00:39:06.976 net/null: not in enabled drivers build config
00:39:06.976 net/octeontx: not in enabled drivers build config
00:39:06.976 net/octeon_ep: not in enabled drivers build config
00:39:06.976 net/pcap: not in enabled drivers build config
00:39:06.976 net/pfe: not in enabled drivers build config
00:39:06.976 net/qede: not in enabled drivers build config
00:39:06.976 net/ring: not in enabled drivers build config
00:39:06.976 net/sfc: not in enabled drivers build config
00:39:06.976 net/softnic: not in enabled drivers build config
00:39:06.976 net/tap: not in enabled drivers build config
00:39:06.976 net/thunderx: not in enabled drivers build config
00:39:06.976 net/txgbe: not in enabled drivers build config
00:39:06.976 net/vdev_netvsc: not in enabled drivers build config
00:39:06.976 net/vhost: not in enabled drivers build config
00:39:06.976 net/virtio: not in enabled drivers build config
00:39:06.976 net/vmxnet3: not in enabled drivers build config
00:39:06.976 raw/*: missing internal dependency, "rawdev"
00:39:06.976 crypto/armv8: not in enabled drivers build config
00:39:06.976 crypto/bcmfs: not in enabled drivers build config
00:39:06.976 crypto/caam_jr: not in enabled drivers build config
00:39:06.976 crypto/ccp: not in enabled drivers build config
00:39:06.976 crypto/cnxk: not in enabled drivers build config
00:39:06.976 crypto/dpaa_sec: not in enabled drivers build config
00:39:06.976 crypto/dpaa2_sec: not in enabled drivers build config
00:39:06.976 crypto/ipsec_mb: not in enabled drivers build config
00:39:06.976 crypto/mlx5: not in enabled drivers build config
00:39:06.976 crypto/mvsam: not in enabled drivers build config
00:39:06.976 crypto/nitrox: not in enabled drivers build config
00:39:06.976 crypto/null: not in enabled drivers build config
00:39:06.976 crypto/octeontx: not in enabled drivers build config
00:39:06.976 crypto/openssl: not in enabled drivers build config
00:39:06.976 crypto/scheduler: not in enabled drivers build config
00:39:06.976 crypto/uadk: not in enabled drivers build config
00:39:06.976 crypto/virtio: not in enabled drivers build config
00:39:06.976 compress/isal: not in enabled drivers build config
00:39:06.976 compress/mlx5: not in enabled drivers build config
00:39:06.976 compress/nitrox: not in enabled drivers build config
00:39:06.976 compress/octeontx: not in enabled drivers build config
00:39:06.976 compress/zlib: not in enabled drivers build config
00:39:06.976 regex/*: missing internal dependency, "regexdev"
00:39:06.976 ml/*: missing internal dependency, "mldev"
00:39:06.976 vdpa/ifc: not in enabled drivers build config
00:39:06.976 vdpa/mlx5: not in enabled drivers build config
00:39:06.976 vdpa/nfp: not in enabled drivers build config
00:39:06.976 vdpa/sfc: not in enabled drivers build config
00:39:06.976 event/*: missing internal dependency, "eventdev"
00:39:06.976 baseband/*: missing internal dependency, "bbdev"
00:39:06.976 gpu/*: missing internal dependency, "gpudev"
00:39:06.976 
00:39:06.976 
00:39:06.976 Build targets in project: 85
00:39:06.976 
00:39:06.976 DPDK 24.03.0
00:39:06.976 
00:39:06.976 User defined options
00:39:06.976 buildtype : debug
00:39:06.976 default_library : shared
00:39:06.976 libdir : lib
00:39:06.976 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:39:06.976 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:39:06.976 c_link_args : 
00:39:06.976 cpu_instruction_set: native
00:39:06.976 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:39:06.976 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:39:06.976 enable_docs : false
00:39:06.976 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:39:06.976 enable_kmods : false
00:39:06.976 max_lcores : 128
00:39:06.976 tests : false
00:39:06.976 
00:39:06.976 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:39:06.976 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:39:06.976 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:39:06.976 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:39:06.976 [3/268] Linking static target lib/librte_log.a
00:39:06.976 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:39:06.976 [5/268] Linking static target lib/librte_kvargs.a
00:39:06.976 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:39:06.976 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:39:06.976 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:39:06.976 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:39:06.977 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:39:06.977 [11/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:39:06.977 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:39:06.977 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:39:06.977 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:39:06.977 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:39:06.977 [16/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:39:06.977 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:39:06.977 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:39:06.977 [19/268] Linking static target lib/librte_telemetry.a
00:39:06.977 [20/268] Linking target lib/librte_log.so.24.1
00:39:07.236 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:39:07.495 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:39:07.495 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:39:07.495 [24/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:39:07.495 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:39:07.495 [26/268] Linking target lib/librte_kvargs.so.24.1
00:39:07.753 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:39:07.753 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:39:08.012 [29/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:39:08.012 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:39:08.012 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:39:08.012 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:39:08.012 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:39:08.271 [34/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:39:08.271 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:39:08.271 [36/268] Linking target lib/librte_telemetry.so.24.1
00:39:08.271 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:39:08.271 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:39:08.529 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:39:08.529 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:39:08.529 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:39:08.529 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:39:08.788 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:39:08.788 [44/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:39:08.788 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:39:09.047 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:39:09.047 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:39:09.047 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:39:09.306 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:39:09.306 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:39:09.306 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:39:09.564 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:39:09.564 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:39:09.823 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:39:09.823 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:39:10.082 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:39:10.082 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:39:10.082 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:39:10.082 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:39:10.339 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:39:10.339 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:39:10.597 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:39:10.597 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:39:10.597 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:39:10.597 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:39:10.597 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:39:11.163 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:39:11.163 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:39:11.163 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:39:11.163 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:39:11.421 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:39:11.421 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:39:11.421 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:39:11.680 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:39:11.680 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:39:11.680 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:39:11.680 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:39:11.939 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:39:11.939 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:39:12.196 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:39:12.196 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:39:12.453 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:39:12.453 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:39:12.453 [84/268] Linking static target lib/librte_ring.a
00:39:12.712 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:39:12.712 [86/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:39:12.712 [87/268] Linking static target lib/librte_rcu.a
00:39:12.712 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:39:12.712 [89/268] Linking static target lib/librte_eal.a
00:39:12.970 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:39:13.229 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:39:13.229 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:39:13.229 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:39:13.487 [94/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:39:13.487 [95/268] Linking static target lib/librte_mempool.a
00:39:13.745 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:39:13.745 [97/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:39:13.745 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:39:14.003 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:39:14.261 [100/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:39:14.261 [101/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:39:14.261 [102/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:39:14.520 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:39:14.520 [104/268] Linking static target lib/librte_mbuf.a
00:39:14.777 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:39:14.777 [106/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:39:14.777 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:39:15.035 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:39:15.035 [109/268] Linking static target lib/librte_meter.a
00:39:15.035 [110/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:39:15.035 [111/268] Linking static target lib/librte_net.a
00:39:15.035 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:39:15.601 [113/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:39:15.601 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:39:15.858 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:39:15.858 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:39:15.858 [117/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:39:15.858 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:39:15.858 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:39:16.116 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:39:16.374 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:39:16.938 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:39:16.938 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:39:16.938 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:39:16.938 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:39:17.195 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:39:17.195 [127/268] Linking static target lib/librte_pci.a
00:39:17.195 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:39:17.195 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:39:17.195 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:39:17.453 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:39:17.453 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:39:17.453 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:39:17.453 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:39:17.453 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:39:17.453 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:39:17.453 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:39:17.712 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:39:17.712 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:39:17.712 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:39:17.712 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:39:17.712 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:39:17.712 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:39:17.712 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:39:17.712 [145/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:39:17.971 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:39:17.971 [147/268] Linking static target lib/librte_cmdline.a
00:39:18.229 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:39:18.486 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:39:18.486 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:39:18.486 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:39:18.486 [152/268] Linking static target lib/librte_ethdev.a
00:39:18.745 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:39:18.745 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:39:18.745 [155/268] Linking static target lib/librte_hash.a
00:39:18.745 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:39:18.745 [157/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:39:18.745 [158/268] Linking static target lib/librte_timer.a
00:39:19.003 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:39:19.003 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:39:19.571 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:39:19.571 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:39:19.571 [163/268] Linking static target lib/librte_compressdev.a
00:39:19.830 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:39:19.830 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:39:19.830 [166/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:39:19.830 [167/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:39:19.830 [168/268] Linking static target lib/librte_cryptodev.a
00:39:19.830 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:39:20.090 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:39:20.090 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:39:20.090 [172/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:39:20.090 [173/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:39:20.090 [174/268] Linking static target lib/librte_dmadev.a
00:39:20.349 [175/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:39:20.349 [176/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:39:20.637 [177/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:39:20.637 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:39:20.895 [179/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:39:20.896 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:39:20.896 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:39:20.896 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:39:21.154 [183/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:39:21.414 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:39:21.414 [185/268] Linking static target lib/librte_power.a
00:39:21.414 [186/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:39:21.674 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:39:21.674 [188/268] Linking static target lib/librte_reorder.a
00:39:21.933 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:39:21.933 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:39:22.337 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:39:22.337 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:39:22.337 [193/268] Linking static target lib/librte_security.a
00:39:22.615 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:39:22.887 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:39:23.202 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:39:23.202 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:39:23.202 [198/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:39:23.202 [199/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:39:23.202 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:39:23.490 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:39:23.826 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:39:24.111 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:39:24.111 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:39:24.111 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:39:24.111 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:39:24.383 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:39:24.383 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:39:24.383 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:39:24.383 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:39:24.383 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:39:24.642 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:39:24.642 [213/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:39:24.642 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:39:24.901 [215/268] Linking static target drivers/librte_bus_pci.a
00:39:24.901 [216/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:39:24.901 [217/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:39:24.901 [218/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:39:24.901 [219/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:39:24.901 [220/268] Linking static target drivers/librte_bus_vdev.a
00:39:24.901 [221/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:39:24.901 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:39:25.160 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:39:25.160 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:39:25.160 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:39:25.160 [226/268] Linking static target drivers/librte_mempool_ring.a
00:39:25.419 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:39:25.419 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:39:26.797 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:39:26.797 [230/268] Linking target lib/librte_eal.so.24.1
00:39:27.056 [231/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:39:27.056 [232/268] Linking static target lib/librte_vhost.a
00:39:27.056 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:39:27.056 [234/268] Linking target lib/librte_ring.so.24.1
00:39:27.056 [235/268] Linking target lib/librte_pci.so.24.1
00:39:27.056 [236/268] Linking target drivers/librte_bus_vdev.so.24.1
00:39:27.056 [237/268] Linking target lib/librte_meter.so.24.1
00:39:27.056 [238/268] Linking target lib/librte_dmadev.so.24.1
00:39:27.316 [239/268] Linking target lib/librte_timer.so.24.1
00:39:27.316 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:39:27.316 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:39:27.316 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:39:27.316 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:39:27.316 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:39:27.316 [245/268] Linking target lib/librte_rcu.so.24.1
00:39:27.316 [246/268] Linking target lib/librte_mempool.so.24.1
00:39:27.575 [247/268] Linking target drivers/librte_bus_pci.so.24.1
00:39:27.575 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:39:27.575 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:39:27.575 [250/268] Linking target lib/librte_mbuf.so.24.1
00:39:27.575 [251/268] Linking target drivers/librte_mempool_ring.so.24.1
00:39:27.834 [252/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:39:27.834 [253/268] Linking target lib/librte_reorder.so.24.1
00:39:27.834 [254/268] Linking target lib/librte_net.so.24.1
00:39:27.834 [255/268] Linking target lib/librte_compressdev.so.24.1
00:39:27.834 [256/268] Linking target lib/librte_cryptodev.so.24.1
00:39:28.093 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:39:28.093 [258/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:39:28.093 [259/268] Linking target lib/librte_hash.so.24.1
00:39:28.093 [260/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:39:28.093 [261/268] Linking target lib/librte_cmdline.so.24.1
00:39:28.093 [262/268] Linking target lib/librte_security.so.24.1
00:39:28.093 [263/268] Linking target lib/librte_ethdev.so.24.1
00:39:28.351 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:39:28.351 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:39:28.351 [266/268] Linking target lib/librte_power.so.24.1
00:39:28.610 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:39:28.610 [268/268] Linking target lib/librte_vhost.so.24.1
00:39:28.610 INFO: autodetecting backend as ninja
00:39:28.610 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10
00:40:00.695 CC lib/ut/ut.o
00:40:00.695 CC lib/ut_mock/mock.o
00:40:00.695 CC lib/log/log_flags.o
00:40:00.695 CC lib/log/log.o
00:40:00.695 CC lib/log/log_deprecated.o
00:40:00.695 LIB libspdk_ut_mock.a
00:40:00.695 SO libspdk_ut_mock.so.6.0
00:40:00.695 LIB libspdk_log.a
00:40:00.695 LIB libspdk_ut.a
00:40:00.695 SO libspdk_ut.so.2.0
00:40:00.695 SYMLINK libspdk_ut_mock.so
00:40:00.695 SO libspdk_log.so.7.1
00:40:00.695 SYMLINK libspdk_ut.so
00:40:00.695 SYMLINK libspdk_log.so
00:40:00.695 CXX lib/trace_parser/trace.o
00:40:00.695 CC lib/util/base64.o
00:40:00.695 CC lib/util/bit_array.o
00:40:00.695 CC lib/util/crc16.o
00:40:00.695 CC lib/util/cpuset.o
00:40:00.695 CC lib/util/crc32c.o
00:40:00.695 CC lib/util/crc32.o
00:40:00.695 CC lib/dma/dma.o
00:40:00.695 CC lib/ioat/ioat.o
00:40:00.695 CC lib/vfio_user/host/vfio_user_pci.o
00:40:00.695 CC lib/util/crc32_ieee.o
00:40:00.695 CC lib/util/crc64.o
00:40:00.695 CC lib/vfio_user/host/vfio_user.o
00:40:00.695 CC lib/util/dif.o
00:40:00.695 LIB libspdk_dma.a
00:40:00.695 CC lib/util/fd.o
00:40:00.695 SO libspdk_dma.so.5.0
00:40:00.695 CC lib/util/fd_group.o
00:40:00.695 CC lib/util/file.o
00:40:00.695 CC lib/util/hexlify.o
00:40:00.695 SYMLINK libspdk_dma.so
00:40:00.695 CC lib/util/iov.o
00:40:00.695 LIB libspdk_ioat.a
00:40:00.695 CC lib/util/math.o
00:40:00.695 SO libspdk_ioat.so.7.0
00:40:00.695 CC lib/util/net.o
00:40:00.695 LIB libspdk_vfio_user.a
00:40:00.695 SYMLINK libspdk_ioat.so
00:40:00.695 CC lib/util/pipe.o
00:40:00.695 SO libspdk_vfio_user.so.5.0
00:40:00.695 CC lib/util/strerror_tls.o
00:40:00.695 CC lib/util/string.o
00:40:00.695 CC lib/util/uuid.o
00:40:00.695 SYMLINK libspdk_vfio_user.so
00:40:00.695 CC lib/util/xor.o
00:40:00.695 CC lib/util/zipf.o
00:40:00.695 CC lib/util/md5.o
00:40:00.695 LIB libspdk_util.a
00:40:00.695 LIB libspdk_trace_parser.a
00:40:00.695 SO libspdk_util.so.10.1
00:40:00.695 SO libspdk_trace_parser.so.6.0
00:40:00.695 SYMLINK libspdk_util.so
00:40:00.695 SYMLINK libspdk_trace_parser.so
00:40:00.695 CC lib/vmd/vmd.o
00:40:00.695 CC lib/vmd/led.o
00:40:00.695 CC lib/rdma_utils/rdma_utils.o
00:40:00.695 CC lib/json/json_parse.o
00:40:00.695 CC lib/json/json_util.o
00:40:00.695 CC lib/json/json_write.o
00:40:00.695 CC lib/idxd/idxd.o
00:40:00.695 CC lib/idxd/idxd_user.o
00:40:00.695 CC lib/conf/conf.o
00:40:00.695 CC lib/env_dpdk/env.o
00:40:00.954 CC lib/idxd/idxd_kernel.o
00:40:00.954 LIB libspdk_conf.a
00:40:00.954 CC lib/env_dpdk/memory.o
00:40:00.954 CC lib/env_dpdk/pci.o
00:40:00.954 CC lib/env_dpdk/init.o
00:40:00.954 SO libspdk_conf.so.6.0
00:40:00.954 LIB libspdk_rdma_utils.a
00:40:00.954 LIB libspdk_json.a
00:40:01.213 SO libspdk_rdma_utils.so.1.0
00:40:01.213 SYMLINK libspdk_conf.so
00:40:01.213 SO libspdk_json.so.6.0
00:40:01.213 CC lib/env_dpdk/threads.o
00:40:01.213 CC lib/env_dpdk/pci_ioat.o
00:40:01.213 SYMLINK libspdk_rdma_utils.so
00:40:01.213 CC lib/env_dpdk/pci_virtio.o
00:40:01.213 SYMLINK libspdk_json.so
00:40:01.213 CC lib/env_dpdk/pci_vmd.o
00:40:01.511 LIB libspdk_idxd.a
00:40:01.511 CC lib/env_dpdk/pci_idxd.o
00:40:01.511 CC lib/env_dpdk/pci_event.o
00:40:01.511 SO libspdk_idxd.so.12.1
00:40:01.511 CC lib/rdma_provider/common.o
00:40:01.511 SYMLINK libspdk_idxd.so
00:40:01.511 CC lib/env_dpdk/sigbus_handler.o
00:40:01.511 CC lib/env_dpdk/pci_dpdk.o
00:40:01.511 CC lib/env_dpdk/pci_dpdk_2207.o
00:40:01.511 CC lib/jsonrpc/jsonrpc_server.o
00:40:01.511 CC lib/rdma_provider/rdma_provider_verbs.o
00:40:01.511 LIB libspdk_vmd.a
00:40:01.511 CC lib/env_dpdk/pci_dpdk_2211.o
00:40:01.770 SO libspdk_vmd.so.6.0
00:40:01.770 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:40:01.770 SYMLINK libspdk_vmd.so
00:40:01.770 CC lib/jsonrpc/jsonrpc_client.o
00:40:01.770 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:40:01.770 LIB libspdk_rdma_provider.a
00:40:02.028 SO libspdk_rdma_provider.so.7.0
00:40:02.028 LIB libspdk_jsonrpc.a
00:40:02.028 SYMLINK libspdk_rdma_provider.so
00:40:02.028 SO libspdk_jsonrpc.so.6.0
00:40:02.028 SYMLINK libspdk_jsonrpc.so
00:40:02.287 LIB libspdk_env_dpdk.a
00:40:02.287 SO libspdk_env_dpdk.so.15.1
00:40:02.546 CC lib/rpc/rpc.o
00:40:02.546 SYMLINK libspdk_env_dpdk.so
00:40:02.805 LIB libspdk_rpc.a
00:40:02.805 SO libspdk_rpc.so.6.0
00:40:02.805 SYMLINK libspdk_rpc.so
00:40:03.064 CC lib/keyring/keyring.o
00:40:03.064 CC lib/keyring/keyring_rpc.o
00:40:03.064 CC lib/trace/trace.o
00:40:03.064 CC lib/trace/trace_flags.o
00:40:03.064 CC lib/trace/trace_rpc.o
00:40:03.323 CC lib/notify/notify.o
00:40:03.323 CC lib/notify/notify_rpc.o
00:40:03.323 LIB libspdk_notify.a
00:40:03.323 LIB libspdk_keyring.a
00:40:03.323 SO libspdk_notify.so.6.0
00:40:03.323 SO libspdk_keyring.so.2.0
00:40:03.581 LIB libspdk_trace.a
00:40:03.581 SYMLINK libspdk_notify.so
00:40:03.581 SYMLINK libspdk_keyring.so
00:40:03.581 SO libspdk_trace.so.11.0
00:40:03.581 SYMLINK libspdk_trace.so
00:40:03.839 CC lib/thread/iobuf.o
00:40:03.839 CC lib/thread/thread.o
00:40:03.839 CC lib/sock/sock.o
00:40:03.839 CC lib/sock/sock_rpc.o
00:40:04.406 LIB libspdk_sock.a
00:40:04.406 SO libspdk_sock.so.10.0
00:40:04.406 SYMLINK libspdk_sock.so
00:40:04.664 CC lib/nvme/nvme_ctrlr_cmd.o
00:40:04.664 CC lib/nvme/nvme_ctrlr.o
00:40:04.664 CC lib/nvme/nvme_fabric.o
00:40:04.664 CC lib/nvme/nvme_ns_cmd.o
00:40:04.664 CC lib/nvme/nvme_ns.o
00:40:04.664 CC lib/nvme/nvme_pcie_common.o
00:40:04.664 CC lib/nvme/nvme_pcie.o
00:40:04.664 CC lib/nvme/nvme_qpair.o
00:40:04.664 CC lib/nvme/nvme.o
00:40:06.041 CC lib/nvme/nvme_quirks.o
00:40:06.041 CC lib/nvme/nvme_transport.o
00:40:06.041 CC lib/nvme/nvme_discovery.o
00:40:06.041 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:40:06.041 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:40:06.041 CC lib/nvme/nvme_tcp.o
00:40:06.041 CC lib/nvme/nvme_opal.o
00:40:06.300 LIB libspdk_thread.a
00:40:06.300 SO libspdk_thread.so.11.0
00:40:06.559 CC lib/nvme/nvme_io_msg.o
00:40:06.559 SYMLINK libspdk_thread.so
00:40:06.559 CC lib/nvme/nvme_poll_group.o
00:40:06.818 CC lib/nvme/nvme_zns.o
00:40:06.819 CC lib/nvme/nvme_stubs.o
00:40:07.077 CC lib/nvme/nvme_auth.o
00:40:07.077 CC lib/nvme/nvme_cuse.o
00:40:07.077 CC lib/accel/accel.o
00:40:07.335 CC lib/accel/accel_rpc.o
00:40:07.904 CC lib/blob/blobstore.o
00:40:07.904 CC lib/blob/request.o
00:40:07.904 CC lib/init/json_config.o
00:40:07.904 CC lib/accel/accel_sw.o
00:40:07.904 CC lib/virtio/virtio.o
00:40:08.491 CC lib/init/subsystem.o
00:40:08.491 CC lib/virtio/virtio_vhost_user.o
00:40:08.491 CC lib/blob/zeroes.o
00:40:08.491 CC lib/blob/blob_bs_dev.o
00:40:08.491 CC lib/virtio/virtio_vfio_user.o
00:40:08.491 CC lib/virtio/virtio_pci.o
00:40:08.750 CC lib/nvme/nvme_rdma.o
00:40:08.750 CC lib/init/subsystem_rpc.o
00:40:08.750 CC lib/init/rpc.o
00:40:09.009 CC lib/fsdev/fsdev.o
00:40:09.009 CC lib/fsdev/fsdev_io.o
00:40:09.009 CC lib/fsdev/fsdev_rpc.o
00:40:09.009 LIB libspdk_accel.a
00:40:09.009 LIB libspdk_init.a
00:40:09.009 LIB libspdk_virtio.a
00:40:09.009 SO libspdk_accel.so.16.0
00:40:09.009 SO libspdk_init.so.6.0
00:40:09.268 SO libspdk_virtio.so.7.0
00:40:09.268 SYMLINK libspdk_accel.so
00:40:09.268 SYMLINK libspdk_init.so
00:40:09.268 SYMLINK libspdk_virtio.so
00:40:09.526 CC lib/bdev/bdev.o
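(Aside on the DPDK configuration: the "User defined options" summary printed earlier in this log corresponds, roughly, to a meson setup invocation along the lines below. This is a reconstructed sketch, not a command captured by the CI run; the option names are standard DPDK meson options, but the full disable/enable lists are the ones printed in the summary above, abbreviated here with "...".)

    # Sketch only: approximate the DPDK build configuration summarized earlier.
    meson setup build-tmp \
        -Dbuildtype=debug \
        -Ddefault_library=shared \
        -Dprefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Dmax_lcores=128 -Dtests=false -Denable_docs=false -Denable_kmods=false \
        -Ddisable_apps='dumpcap,graph,pdump,...' \
        -Ddisable_libs='acl,argparse,bbdev,...' \
        -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring,...'
    # Then build with the job count the log shows ninja being invoked with:
    ninja -C build-tmp -j 10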
00:40:09.526 CC lib/bdev/bdev_rpc.o
00:40:09.526 CC lib/bdev/part.o
00:40:09.526 CC lib/bdev/bdev_zone.o
00:40:09.526 CC lib/bdev/scsi_nvme.o
00:40:09.526 CC lib/event/app.o
00:40:09.526 CC lib/event/reactor.o
00:40:09.785 LIB libspdk_fsdev.a
00:40:09.785 SO libspdk_fsdev.so.2.0
00:40:09.785 CC lib/event/log_rpc.o
00:40:09.785 CC lib/event/app_rpc.o
00:40:09.785 CC lib/event/scheduler_static.o
00:40:09.785 SYMLINK libspdk_fsdev.so
00:40:10.109 CC lib/fuse_dispatcher/fuse_dispatcher.o
00:40:10.109 LIB libspdk_event.a
00:40:10.396 SO libspdk_event.so.14.0
00:40:10.396 SYMLINK libspdk_event.so
00:40:10.970 LIB libspdk_nvme.a
00:40:10.970 LIB libspdk_fuse_dispatcher.a
00:40:11.229 SO libspdk_fuse_dispatcher.so.1.0
00:40:11.229 SYMLINK libspdk_fuse_dispatcher.so
00:40:11.229 SO libspdk_nvme.so.15.0
00:40:11.489 SYMLINK libspdk_nvme.so
00:40:12.428 LIB libspdk_blob.a
00:40:12.428 SO libspdk_blob.so.12.0
00:40:12.428 SYMLINK libspdk_blob.so
00:40:12.686 CC lib/blobfs/blobfs.o
00:40:12.686 CC lib/blobfs/tree.o
00:40:12.686 CC lib/lvol/lvol.o
00:40:12.977 LIB libspdk_bdev.a
00:40:12.977 SO libspdk_bdev.so.17.0
00:40:13.235 SYMLINK libspdk_bdev.so
00:40:13.495 CC lib/ftl/ftl_core.o
00:40:13.495 CC lib/ftl/ftl_layout.o
00:40:13.495 CC lib/ftl/ftl_init.o
00:40:13.495 CC lib/ftl/ftl_debug.o
00:40:13.495 CC lib/nbd/nbd.o
00:40:13.495 CC lib/nvmf/ctrlr.o
00:40:13.495 CC lib/scsi/dev.o
00:40:13.495 CC lib/ublk/ublk.o
00:40:13.754 CC lib/ublk/ublk_rpc.o
00:40:13.754 CC lib/scsi/lun.o
00:40:13.754 LIB libspdk_lvol.a
00:40:13.754 CC lib/ftl/ftl_io.o
00:40:13.754 CC lib/nbd/nbd_rpc.o
00:40:13.754 SO libspdk_lvol.so.11.0
00:40:13.754 CC lib/ftl/ftl_sb.o
00:40:13.754 LIB libspdk_blobfs.a
00:40:13.754 SO libspdk_blobfs.so.11.0
00:40:14.013 CC lib/ftl/ftl_l2p.o
00:40:14.013 SYMLINK libspdk_lvol.so
00:40:14.013 CC lib/ftl/ftl_l2p_flat.o
00:40:14.013 SYMLINK libspdk_blobfs.so
00:40:14.013 CC lib/ftl/ftl_nv_cache.o
00:40:14.013 LIB libspdk_nbd.a
00:40:14.013 CC lib/scsi/port.o
00:40:14.013 CC lib/scsi/scsi.o
00:40:14.013 SO libspdk_nbd.so.7.0
00:40:14.271 CC lib/nvmf/ctrlr_discovery.o
00:40:14.271 CC lib/nvmf/ctrlr_bdev.o
00:40:14.271 SYMLINK libspdk_nbd.so
00:40:14.271 CC lib/nvmf/subsystem.o
00:40:14.271 CC lib/ftl/ftl_band.o
00:40:14.271 CC lib/nvmf/nvmf.o
00:40:14.271 CC lib/nvmf/nvmf_rpc.o
00:40:14.271 CC lib/scsi/scsi_bdev.o
00:40:14.531 CC lib/nvmf/transport.o
00:40:14.790 LIB libspdk_ublk.a
00:40:14.790 SO libspdk_ublk.so.3.0
00:40:14.790 SYMLINK libspdk_ublk.so
00:40:14.790 CC lib/scsi/scsi_pr.o
00:40:15.049 CC lib/nvmf/tcp.o
00:40:15.049 CC lib/nvmf/stubs.o
00:40:15.308 CC lib/nvmf/mdns_server.o
00:40:15.308 CC lib/ftl/ftl_band_ops.o
00:40:15.308 CC lib/scsi/scsi_rpc.o
00:40:15.566 CC lib/scsi/task.o
00:40:15.566 CC lib/ftl/ftl_writer.o
00:40:15.566 CC lib/nvmf/rdma.o
00:40:15.824 CC lib/nvmf/auth.o
00:40:15.824 CC lib/ftl/ftl_rq.o
00:40:15.824 CC lib/ftl/ftl_reloc.o
00:40:15.824 LIB libspdk_scsi.a
00:40:16.085 CC lib/ftl/ftl_l2p_cache.o
00:40:16.085 SO libspdk_scsi.so.9.0
00:40:16.085 CC lib/ftl/ftl_p2l.o
00:40:16.085 CC lib/ftl/ftl_p2l_log.o
00:40:16.085 CC lib/ftl/mngt/ftl_mngt.o
00:40:16.085 SYMLINK libspdk_scsi.so
00:40:16.085 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:40:16.343 CC lib/iscsi/conn.o
00:40:16.343 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:40:16.602 CC lib/ftl/mngt/ftl_mngt_startup.o
00:40:16.602 CC lib/ftl/mngt/ftl_mngt_md.o
00:40:16.602 CC lib/ftl/mngt/ftl_mngt_misc.o
00:40:16.860 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:40:16.860 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:40:16.860 CC lib/vhost/vhost.o
00:40:16.860 CC lib/vhost/vhost_rpc.o 00:40:17.118 CC lib/vhost/vhost_scsi.o 00:40:17.118 CC lib/vhost/vhost_blk.o 00:40:17.118 CC lib/vhost/rte_vhost_user.o 00:40:17.118 CC lib/iscsi/init_grp.o 00:40:17.118 CC lib/ftl/mngt/ftl_mngt_band.o 00:40:17.376 CC lib/iscsi/iscsi.o 00:40:17.633 CC lib/iscsi/param.o 00:40:17.633 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:40:17.892 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:40:17.892 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:40:17.892 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:40:18.150 CC lib/ftl/utils/ftl_conf.o 00:40:18.150 CC lib/iscsi/portal_grp.o 00:40:18.407 CC lib/ftl/utils/ftl_md.o 00:40:18.407 CC lib/ftl/utils/ftl_mempool.o 00:40:18.407 CC lib/iscsi/tgt_node.o 00:40:18.665 CC lib/iscsi/iscsi_subsystem.o 00:40:18.665 CC lib/iscsi/iscsi_rpc.o 00:40:18.665 CC lib/ftl/utils/ftl_bitmap.o 00:40:18.665 CC lib/ftl/utils/ftl_property.o 00:40:18.665 CC lib/iscsi/task.o 00:40:18.923 LIB libspdk_vhost.a 00:40:18.923 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:40:18.923 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:40:18.923 SO libspdk_vhost.so.8.0 00:40:18.923 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:40:18.923 SYMLINK libspdk_vhost.so 00:40:18.923 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:40:18.923 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:40:18.923 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:40:19.182 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:40:19.182 CC lib/ftl/upgrade/ftl_sb_v3.o 00:40:19.182 CC lib/ftl/upgrade/ftl_sb_v5.o 00:40:19.182 CC lib/ftl/nvc/ftl_nvc_dev.o 00:40:19.182 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:40:19.182 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:40:19.182 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:40:19.182 CC lib/ftl/base/ftl_base_dev.o 00:40:19.440 CC lib/ftl/base/ftl_base_bdev.o 00:40:19.440 CC lib/ftl/ftl_trace.o 00:40:19.440 LIB libspdk_iscsi.a 00:40:19.440 SO libspdk_iscsi.so.8.0 00:40:19.699 LIB libspdk_ftl.a 00:40:19.699 SYMLINK libspdk_iscsi.so 00:40:19.958 SO libspdk_ftl.so.9.0 00:40:19.958 LIB libspdk_nvmf.a 00:40:20.217 SO libspdk_nvmf.so.20.0 00:40:20.217 SYMLINK libspdk_ftl.so 00:40:20.476 SYMLINK libspdk_nvmf.so 00:40:20.734 CC module/env_dpdk/env_dpdk_rpc.o 00:40:20.734 CC module/keyring/file/keyring.o 00:40:20.734 CC module/blob/bdev/blob_bdev.o 00:40:20.734 CC module/scheduler/gscheduler/gscheduler.o 00:40:20.734 CC module/accel/error/accel_error.o 00:40:20.734 CC module/fsdev/aio/fsdev_aio.o 00:40:20.734 CC module/scheduler/dynamic/scheduler_dynamic.o 00:40:20.734 CC module/sock/posix/posix.o 00:40:20.734 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:40:20.734 CC module/accel/ioat/accel_ioat.o 00:40:20.992 LIB libspdk_env_dpdk_rpc.a 00:40:20.992 SO libspdk_env_dpdk_rpc.so.6.0 00:40:20.992 LIB libspdk_scheduler_gscheduler.a 00:40:20.992 SYMLINK libspdk_env_dpdk_rpc.so 00:40:20.992 CC module/accel/ioat/accel_ioat_rpc.o 00:40:20.992 SO libspdk_scheduler_gscheduler.so.4.0 00:40:20.992 LIB libspdk_scheduler_dpdk_governor.a 00:40:20.992 SO libspdk_scheduler_dpdk_governor.so.4.0 00:40:20.992 CC module/keyring/file/keyring_rpc.o 00:40:20.992 SYMLINK libspdk_scheduler_gscheduler.so 00:40:20.992 CC module/accel/error/accel_error_rpc.o 00:40:20.992 LIB libspdk_blob_bdev.a 00:40:20.992 SYMLINK libspdk_scheduler_dpdk_governor.so 00:40:20.992 SO libspdk_blob_bdev.so.12.0 00:40:20.992 CC module/fsdev/aio/fsdev_aio_rpc.o 00:40:21.249 SYMLINK libspdk_blob_bdev.so 00:40:21.249 CC module/fsdev/aio/linux_aio_mgr.o 00:40:21.249 LIB libspdk_accel_ioat.a 00:40:21.249 LIB libspdk_scheduler_dynamic.a 00:40:21.249 SO libspdk_scheduler_dynamic.so.4.0 00:40:21.249 
SO libspdk_accel_ioat.so.6.0 00:40:21.249 LIB libspdk_accel_error.a 00:40:21.249 LIB libspdk_keyring_file.a 00:40:21.249 SYMLINK libspdk_scheduler_dynamic.so 00:40:21.249 SO libspdk_accel_error.so.2.0 00:40:21.249 SYMLINK libspdk_accel_ioat.so 00:40:21.249 SO libspdk_keyring_file.so.2.0 00:40:21.507 CC module/accel/dsa/accel_dsa.o 00:40:21.507 CC module/accel/dsa/accel_dsa_rpc.o 00:40:21.507 CC module/accel/iaa/accel_iaa.o 00:40:21.507 SYMLINK libspdk_accel_error.so 00:40:21.507 CC module/accel/iaa/accel_iaa_rpc.o 00:40:21.507 SYMLINK libspdk_keyring_file.so 00:40:21.507 CC module/keyring/linux/keyring.o 00:40:21.765 CC module/keyring/linux/keyring_rpc.o 00:40:21.765 LIB libspdk_accel_iaa.a 00:40:21.765 CC module/bdev/delay/vbdev_delay.o 00:40:21.765 LIB libspdk_accel_dsa.a 00:40:21.765 CC module/blobfs/bdev/blobfs_bdev.o 00:40:21.765 SO libspdk_accel_iaa.so.3.0 00:40:21.765 LIB libspdk_fsdev_aio.a 00:40:21.765 CC module/bdev/error/vbdev_error.o 00:40:21.765 SO libspdk_accel_dsa.so.5.0 00:40:21.765 LIB libspdk_sock_posix.a 00:40:21.765 SO libspdk_fsdev_aio.so.1.0 00:40:21.765 SYMLINK libspdk_accel_iaa.so 00:40:22.023 CC module/bdev/error/vbdev_error_rpc.o 00:40:22.023 SO libspdk_sock_posix.so.6.0 00:40:22.023 LIB libspdk_keyring_linux.a 00:40:22.023 CC module/bdev/gpt/gpt.o 00:40:22.023 SYMLINK libspdk_accel_dsa.so 00:40:22.023 CC module/bdev/delay/vbdev_delay_rpc.o 00:40:22.023 CC module/bdev/gpt/vbdev_gpt.o 00:40:22.023 SO libspdk_keyring_linux.so.1.0 00:40:22.023 SYMLINK libspdk_fsdev_aio.so 00:40:22.023 SYMLINK libspdk_sock_posix.so 00:40:22.023 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:40:22.023 SYMLINK libspdk_keyring_linux.so 00:40:22.281 CC module/bdev/malloc/bdev_malloc.o 00:40:22.281 LIB libspdk_blobfs_bdev.a 00:40:22.281 CC module/bdev/lvol/vbdev_lvol.o 00:40:22.281 SO libspdk_blobfs_bdev.so.6.0 00:40:22.281 LIB libspdk_bdev_delay.a 00:40:22.281 CC module/bdev/null/bdev_null.o 00:40:22.281 SYMLINK libspdk_blobfs_bdev.so 00:40:22.281 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:40:22.281 CC module/bdev/nvme/bdev_nvme.o 00:40:22.281 CC module/bdev/passthru/vbdev_passthru.o 00:40:22.281 LIB libspdk_bdev_error.a 00:40:22.281 LIB libspdk_bdev_gpt.a 00:40:22.281 SO libspdk_bdev_delay.so.6.0 00:40:22.281 SO libspdk_bdev_error.so.6.0 00:40:22.281 CC module/bdev/raid/bdev_raid.o 00:40:22.281 SO libspdk_bdev_gpt.so.6.0 00:40:22.281 SYMLINK libspdk_bdev_delay.so 00:40:22.282 CC module/bdev/raid/bdev_raid_rpc.o 00:40:22.282 SYMLINK libspdk_bdev_error.so 00:40:22.282 CC module/bdev/nvme/bdev_nvme_rpc.o 00:40:22.540 SYMLINK libspdk_bdev_gpt.so 00:40:22.540 CC module/bdev/nvme/nvme_rpc.o 00:40:22.540 CC module/bdev/malloc/bdev_malloc_rpc.o 00:40:22.540 CC module/bdev/null/bdev_null_rpc.o 00:40:22.540 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:40:22.540 CC module/bdev/raid/bdev_raid_sb.o 00:40:22.540 CC module/bdev/raid/raid0.o 00:40:22.540 CC module/bdev/nvme/bdev_mdns_client.o 00:40:22.799 LIB libspdk_bdev_null.a 00:40:22.799 LIB libspdk_bdev_malloc.a 00:40:22.799 SO libspdk_bdev_null.so.6.0 00:40:22.799 SO libspdk_bdev_malloc.so.6.0 00:40:22.799 LIB libspdk_bdev_passthru.a 00:40:22.799 SO libspdk_bdev_passthru.so.6.0 00:40:22.799 SYMLINK libspdk_bdev_null.so 00:40:22.799 SYMLINK libspdk_bdev_malloc.so 00:40:22.799 CC module/bdev/nvme/vbdev_opal.o 00:40:22.799 SYMLINK libspdk_bdev_passthru.so 00:40:22.799 CC module/bdev/raid/raid1.o 00:40:23.058 CC module/bdev/raid/concat.o 00:40:23.058 CC module/bdev/nvme/vbdev_opal_rpc.o 00:40:23.058 CC module/bdev/split/vbdev_split.o 00:40:23.058 CC 
module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:40:23.058 CC module/bdev/zone_block/vbdev_zone_block.o 00:40:23.058 LIB libspdk_bdev_lvol.a 00:40:23.058 SO libspdk_bdev_lvol.so.6.0 00:40:23.058 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:40:23.058 CC module/bdev/split/vbdev_split_rpc.o 00:40:23.718 SYMLINK libspdk_bdev_lvol.so 00:40:23.718 LIB libspdk_bdev_split.a 00:40:23.718 LIB libspdk_bdev_raid.a 00:40:23.718 SO libspdk_bdev_split.so.6.0 00:40:23.718 CC module/bdev/aio/bdev_aio.o 00:40:23.718 CC module/bdev/aio/bdev_aio_rpc.o 00:40:23.718 CC module/bdev/ftl/bdev_ftl.o 00:40:23.718 CC module/bdev/ftl/bdev_ftl_rpc.o 00:40:23.718 CC module/bdev/iscsi/bdev_iscsi.o 00:40:23.718 SO libspdk_bdev_raid.so.6.0 00:40:23.718 LIB libspdk_bdev_zone_block.a 00:40:23.718 CC module/bdev/virtio/bdev_virtio_scsi.o 00:40:23.718 SYMLINK libspdk_bdev_split.so 00:40:23.718 SO libspdk_bdev_zone_block.so.6.0 00:40:23.718 CC module/bdev/virtio/bdev_virtio_blk.o 00:40:23.718 SYMLINK libspdk_bdev_raid.so 00:40:23.718 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:40:23.718 SYMLINK libspdk_bdev_zone_block.so 00:40:23.718 CC module/bdev/virtio/bdev_virtio_rpc.o 00:40:23.976 LIB libspdk_bdev_ftl.a 00:40:23.976 SO libspdk_bdev_ftl.so.6.0 00:40:23.976 LIB libspdk_bdev_aio.a 00:40:23.976 LIB libspdk_bdev_iscsi.a 00:40:23.976 SYMLINK libspdk_bdev_ftl.so 00:40:23.976 SO libspdk_bdev_aio.so.6.0 00:40:23.976 SO libspdk_bdev_iscsi.so.6.0 00:40:24.236 SYMLINK libspdk_bdev_aio.so 00:40:24.236 SYMLINK libspdk_bdev_iscsi.so 00:40:24.236 LIB libspdk_bdev_virtio.a 00:40:24.236 SO libspdk_bdev_virtio.so.6.0 00:40:24.495 SYMLINK libspdk_bdev_virtio.so 00:40:25.434 LIB libspdk_bdev_nvme.a 00:40:25.434 SO libspdk_bdev_nvme.so.7.1 00:40:25.434 SYMLINK libspdk_bdev_nvme.so 00:40:26.002 CC module/event/subsystems/iobuf/iobuf.o 00:40:26.002 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:40:26.002 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:40:26.002 CC module/event/subsystems/fsdev/fsdev.o 00:40:26.002 CC module/event/subsystems/vmd/vmd.o 00:40:26.002 CC module/event/subsystems/vmd/vmd_rpc.o 00:40:26.002 CC module/event/subsystems/keyring/keyring.o 00:40:26.002 CC module/event/subsystems/scheduler/scheduler.o 00:40:26.002 CC module/event/subsystems/sock/sock.o 00:40:26.261 LIB libspdk_event_vhost_blk.a 00:40:26.261 LIB libspdk_event_keyring.a 00:40:26.261 LIB libspdk_event_fsdev.a 00:40:26.261 LIB libspdk_event_sock.a 00:40:26.261 LIB libspdk_event_scheduler.a 00:40:26.261 LIB libspdk_event_vmd.a 00:40:26.261 LIB libspdk_event_iobuf.a 00:40:26.261 SO libspdk_event_vhost_blk.so.3.0 00:40:26.261 SO libspdk_event_keyring.so.1.0 00:40:26.261 SO libspdk_event_fsdev.so.1.0 00:40:26.261 SO libspdk_event_sock.so.5.0 00:40:26.261 SO libspdk_event_vmd.so.6.0 00:40:26.261 SO libspdk_event_scheduler.so.4.0 00:40:26.261 SO libspdk_event_iobuf.so.3.0 00:40:26.261 SYMLINK libspdk_event_fsdev.so 00:40:26.261 SYMLINK libspdk_event_vhost_blk.so 00:40:26.261 SYMLINK libspdk_event_keyring.so 00:40:26.261 SYMLINK libspdk_event_iobuf.so 00:40:26.261 SYMLINK libspdk_event_sock.so 00:40:26.261 SYMLINK libspdk_event_vmd.so 00:40:26.261 SYMLINK libspdk_event_scheduler.so 00:40:26.519 CC module/event/subsystems/accel/accel.o 00:40:26.778 LIB libspdk_event_accel.a 00:40:26.778 SO libspdk_event_accel.so.6.0 00:40:26.778 SYMLINK libspdk_event_accel.so 00:40:27.345 CC module/event/subsystems/bdev/bdev.o 00:40:27.345 LIB libspdk_event_bdev.a 00:40:27.345 SO libspdk_event_bdev.so.6.0 00:40:27.603 SYMLINK libspdk_event_bdev.so 00:40:27.862 CC 
module/event/subsystems/nbd/nbd.o 00:40:27.862 CC module/event/subsystems/scsi/scsi.o 00:40:27.862 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:40:27.862 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:40:27.862 CC module/event/subsystems/ublk/ublk.o 00:40:27.862 LIB libspdk_event_nbd.a 00:40:28.123 LIB libspdk_event_scsi.a 00:40:28.123 SO libspdk_event_nbd.so.6.0 00:40:28.123 SO libspdk_event_scsi.so.6.0 00:40:28.123 LIB libspdk_event_ublk.a 00:40:28.123 SO libspdk_event_ublk.so.3.0 00:40:28.123 SYMLINK libspdk_event_nbd.so 00:40:28.123 SYMLINK libspdk_event_scsi.so 00:40:28.123 LIB libspdk_event_nvmf.a 00:40:28.123 SYMLINK libspdk_event_ublk.so 00:40:28.123 SO libspdk_event_nvmf.so.6.0 00:40:28.381 SYMLINK libspdk_event_nvmf.so 00:40:28.381 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:40:28.381 CC module/event/subsystems/iscsi/iscsi.o 00:40:28.639 LIB libspdk_event_vhost_scsi.a 00:40:28.639 SO libspdk_event_vhost_scsi.so.3.0 00:40:28.639 LIB libspdk_event_iscsi.a 00:40:28.639 SYMLINK libspdk_event_vhost_scsi.so 00:40:28.639 SO libspdk_event_iscsi.so.6.0 00:40:28.639 SYMLINK libspdk_event_iscsi.so 00:40:28.897 SO libspdk.so.6.0 00:40:28.897 SYMLINK libspdk.so 00:40:29.156 TEST_HEADER include/spdk/accel.h 00:40:29.156 TEST_HEADER include/spdk/accel_module.h 00:40:29.156 TEST_HEADER include/spdk/assert.h 00:40:29.156 TEST_HEADER include/spdk/barrier.h 00:40:29.156 TEST_HEADER include/spdk/base64.h 00:40:29.156 TEST_HEADER include/spdk/bdev.h 00:40:29.156 TEST_HEADER include/spdk/bdev_module.h 00:40:29.156 TEST_HEADER include/spdk/bdev_zone.h 00:40:29.156 TEST_HEADER include/spdk/bit_array.h 00:40:29.156 TEST_HEADER include/spdk/bit_pool.h 00:40:29.156 CXX app/trace/trace.o 00:40:29.156 TEST_HEADER include/spdk/blob_bdev.h 00:40:29.156 TEST_HEADER include/spdk/blobfs_bdev.h 00:40:29.156 TEST_HEADER include/spdk/blobfs.h 00:40:29.156 TEST_HEADER include/spdk/blob.h 00:40:29.156 TEST_HEADER include/spdk/conf.h 00:40:29.156 TEST_HEADER include/spdk/config.h 00:40:29.156 TEST_HEADER include/spdk/cpuset.h 00:40:29.156 TEST_HEADER include/spdk/crc16.h 00:40:29.156 TEST_HEADER include/spdk/crc32.h 00:40:29.156 TEST_HEADER include/spdk/crc64.h 00:40:29.156 TEST_HEADER include/spdk/dif.h 00:40:29.156 TEST_HEADER include/spdk/dma.h 00:40:29.156 TEST_HEADER include/spdk/endian.h 00:40:29.156 TEST_HEADER include/spdk/env_dpdk.h 00:40:29.156 TEST_HEADER include/spdk/env.h 00:40:29.156 TEST_HEADER include/spdk/event.h 00:40:29.156 CC examples/interrupt_tgt/interrupt_tgt.o 00:40:29.156 TEST_HEADER include/spdk/fd_group.h 00:40:29.156 TEST_HEADER include/spdk/fd.h 00:40:29.156 TEST_HEADER include/spdk/file.h 00:40:29.156 TEST_HEADER include/spdk/fsdev.h 00:40:29.156 TEST_HEADER include/spdk/fsdev_module.h 00:40:29.156 TEST_HEADER include/spdk/ftl.h 00:40:29.156 TEST_HEADER include/spdk/fuse_dispatcher.h 00:40:29.156 TEST_HEADER include/spdk/gpt_spec.h 00:40:29.156 TEST_HEADER include/spdk/hexlify.h 00:40:29.156 TEST_HEADER include/spdk/histogram_data.h 00:40:29.156 TEST_HEADER include/spdk/idxd.h 00:40:29.415 TEST_HEADER include/spdk/idxd_spec.h 00:40:29.415 CC examples/ioat/perf/perf.o 00:40:29.415 CC examples/util/zipf/zipf.o 00:40:29.415 TEST_HEADER include/spdk/init.h 00:40:29.415 CC test/thread/poller_perf/poller_perf.o 00:40:29.415 TEST_HEADER include/spdk/ioat.h 00:40:29.415 TEST_HEADER include/spdk/ioat_spec.h 00:40:29.415 TEST_HEADER include/spdk/iscsi_spec.h 00:40:29.415 TEST_HEADER include/spdk/json.h 00:40:29.415 TEST_HEADER include/spdk/jsonrpc.h 00:40:29.415 TEST_HEADER 
include/spdk/keyring.h 00:40:29.415 TEST_HEADER include/spdk/keyring_module.h 00:40:29.415 TEST_HEADER include/spdk/likely.h 00:40:29.415 TEST_HEADER include/spdk/log.h 00:40:29.415 TEST_HEADER include/spdk/lvol.h 00:40:29.415 TEST_HEADER include/spdk/md5.h 00:40:29.415 TEST_HEADER include/spdk/memory.h 00:40:29.415 TEST_HEADER include/spdk/mmio.h 00:40:29.415 TEST_HEADER include/spdk/nbd.h 00:40:29.415 TEST_HEADER include/spdk/net.h 00:40:29.415 TEST_HEADER include/spdk/notify.h 00:40:29.415 TEST_HEADER include/spdk/nvme.h 00:40:29.415 TEST_HEADER include/spdk/nvme_intel.h 00:40:29.415 TEST_HEADER include/spdk/nvme_ocssd.h 00:40:29.415 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:40:29.415 CC test/dma/test_dma/test_dma.o 00:40:29.415 TEST_HEADER include/spdk/nvme_spec.h 00:40:29.415 TEST_HEADER include/spdk/nvme_zns.h 00:40:29.415 TEST_HEADER include/spdk/nvmf_cmd.h 00:40:29.415 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:40:29.415 CC test/app/bdev_svc/bdev_svc.o 00:40:29.415 TEST_HEADER include/spdk/nvmf.h 00:40:29.415 TEST_HEADER include/spdk/nvmf_spec.h 00:40:29.415 TEST_HEADER include/spdk/nvmf_transport.h 00:40:29.415 TEST_HEADER include/spdk/opal.h 00:40:29.416 TEST_HEADER include/spdk/opal_spec.h 00:40:29.416 TEST_HEADER include/spdk/pci_ids.h 00:40:29.416 TEST_HEADER include/spdk/pipe.h 00:40:29.416 TEST_HEADER include/spdk/queue.h 00:40:29.416 TEST_HEADER include/spdk/reduce.h 00:40:29.416 TEST_HEADER include/spdk/rpc.h 00:40:29.416 TEST_HEADER include/spdk/scheduler.h 00:40:29.416 TEST_HEADER include/spdk/scsi.h 00:40:29.416 TEST_HEADER include/spdk/scsi_spec.h 00:40:29.416 TEST_HEADER include/spdk/sock.h 00:40:29.416 TEST_HEADER include/spdk/stdinc.h 00:40:29.416 TEST_HEADER include/spdk/string.h 00:40:29.416 TEST_HEADER include/spdk/thread.h 00:40:29.416 TEST_HEADER include/spdk/trace.h 00:40:29.416 TEST_HEADER include/spdk/trace_parser.h 00:40:29.416 TEST_HEADER include/spdk/tree.h 00:40:29.416 TEST_HEADER include/spdk/ublk.h 00:40:29.416 TEST_HEADER include/spdk/util.h 00:40:29.416 TEST_HEADER include/spdk/uuid.h 00:40:29.416 TEST_HEADER include/spdk/version.h 00:40:29.416 TEST_HEADER include/spdk/vfio_user_pci.h 00:40:29.416 CC test/env/mem_callbacks/mem_callbacks.o 00:40:29.416 TEST_HEADER include/spdk/vfio_user_spec.h 00:40:29.416 TEST_HEADER include/spdk/vhost.h 00:40:29.416 TEST_HEADER include/spdk/vmd.h 00:40:29.416 TEST_HEADER include/spdk/xor.h 00:40:29.416 LINK zipf 00:40:29.416 TEST_HEADER include/spdk/zipf.h 00:40:29.416 CXX test/cpp_headers/accel.o 00:40:29.675 LINK poller_perf 00:40:29.675 LINK interrupt_tgt 00:40:29.675 LINK ioat_perf 00:40:29.675 LINK bdev_svc 00:40:29.675 CXX test/cpp_headers/accel_module.o 00:40:29.675 LINK spdk_trace 00:40:29.934 CC test/env/vtophys/vtophys.o 00:40:29.934 CC test/rpc_client/rpc_client_test.o 00:40:29.934 CXX test/cpp_headers/assert.o 00:40:29.934 CC examples/ioat/verify/verify.o 00:40:29.934 CC examples/thread/thread/thread_ex.o 00:40:29.934 LINK test_dma 00:40:30.192 LINK vtophys 00:40:30.192 CC app/trace_record/trace_record.o 00:40:30.192 CXX test/cpp_headers/barrier.o 00:40:30.192 LINK rpc_client_test 00:40:30.192 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:40:30.192 LINK verify 00:40:30.192 LINK mem_callbacks 00:40:30.192 CXX test/cpp_headers/base64.o 00:40:30.192 CXX test/cpp_headers/bdev.o 00:40:30.451 LINK thread 00:40:30.451 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:40:30.451 LINK spdk_trace_record 00:40:30.451 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:40:30.451 CC 
test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:40:30.451 CXX test/cpp_headers/bdev_module.o 00:40:30.451 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:40:30.710 CC examples/sock/hello_world/hello_sock.o 00:40:30.710 LINK nvme_fuzz 00:40:30.710 LINK env_dpdk_post_init 00:40:30.710 CC examples/vmd/lsvmd/lsvmd.o 00:40:30.969 CC app/nvmf_tgt/nvmf_main.o 00:40:30.969 CXX test/cpp_headers/bdev_zone.o 00:40:30.969 LINK lsvmd 00:40:30.969 CC examples/idxd/perf/perf.o 00:40:30.969 LINK hello_sock 00:40:30.969 CC examples/vmd/led/led.o 00:40:31.230 LINK nvmf_tgt 00:40:31.230 LINK led 00:40:31.230 CXX test/cpp_headers/bit_array.o 00:40:31.230 CC test/env/memory/memory_ut.o 00:40:31.230 LINK vhost_fuzz 00:40:31.490 CXX test/cpp_headers/bit_pool.o 00:40:31.490 LINK idxd_perf 00:40:31.490 CC app/iscsi_tgt/iscsi_tgt.o 00:40:31.490 CC examples/fsdev/hello_world/hello_fsdev.o 00:40:31.490 CXX test/cpp_headers/blob_bdev.o 00:40:31.748 CC app/spdk_tgt/spdk_tgt.o 00:40:31.748 CC examples/accel/perf/accel_perf.o 00:40:31.748 LINK iscsi_tgt 00:40:31.748 CC examples/nvme/hello_world/hello_world.o 00:40:31.748 CC examples/blob/hello_world/hello_blob.o 00:40:32.006 LINK hello_fsdev 00:40:32.006 CXX test/cpp_headers/blobfs_bdev.o 00:40:32.006 LINK spdk_tgt 00:40:32.006 LINK hello_world 00:40:32.006 LINK hello_blob 00:40:32.264 LINK iscsi_fuzz 00:40:32.264 CXX test/cpp_headers/blobfs.o 00:40:32.264 CC examples/blob/cli/blobcli.o 00:40:32.264 LINK accel_perf 00:40:32.264 CC test/env/pci/pci_ut.o 00:40:32.522 CXX test/cpp_headers/blob.o 00:40:32.522 CXX test/cpp_headers/conf.o 00:40:32.522 CC examples/nvme/reconnect/reconnect.o 00:40:32.522 CC app/spdk_lspci/spdk_lspci.o 00:40:32.781 CC test/app/histogram_perf/histogram_perf.o 00:40:32.781 CC test/app/jsoncat/jsoncat.o 00:40:32.781 CXX test/cpp_headers/config.o 00:40:32.781 LINK pci_ut 00:40:32.781 LINK spdk_lspci 00:40:32.781 CXX test/cpp_headers/cpuset.o 00:40:33.039 LINK histogram_perf 00:40:33.039 LINK memory_ut 00:40:33.297 CXX test/cpp_headers/crc16.o 00:40:33.297 LINK jsoncat 00:40:33.297 LINK blobcli 00:40:33.297 CC examples/bdev/hello_world/hello_bdev.o 00:40:33.297 CC app/spdk_nvme_perf/perf.o 00:40:33.297 LINK reconnect 00:40:33.555 CXX test/cpp_headers/crc32.o 00:40:33.555 CC app/spdk_nvme_identify/identify.o 00:40:33.555 CC test/app/stub/stub.o 00:40:33.555 CC examples/bdev/bdevperf/bdevperf.o 00:40:33.555 CC examples/nvme/nvme_manage/nvme_manage.o 00:40:33.555 LINK hello_bdev 00:40:33.555 CC examples/nvme/arbitration/arbitration.o 00:40:33.813 CC app/spdk_nvme_discover/discovery_aer.o 00:40:33.813 CXX test/cpp_headers/crc64.o 00:40:33.813 LINK stub 00:40:33.813 CXX test/cpp_headers/dif.o 00:40:34.072 LINK spdk_nvme_discover 00:40:34.072 CC app/spdk_top/spdk_top.o 00:40:34.072 CXX test/cpp_headers/dma.o 00:40:34.375 LINK arbitration 00:40:34.375 CC test/event/event_perf/event_perf.o 00:40:34.375 LINK nvme_manage 00:40:34.633 CC app/vhost/vhost.o 00:40:34.633 CXX test/cpp_headers/endian.o 00:40:34.633 LINK event_perf 00:40:34.634 CXX test/cpp_headers/env_dpdk.o 00:40:34.892 LINK spdk_nvme_identify 00:40:34.892 LINK vhost 00:40:34.892 CC examples/nvme/hotplug/hotplug.o 00:40:34.892 LINK bdevperf 00:40:34.892 CXX test/cpp_headers/env.o 00:40:34.892 LINK spdk_nvme_perf 00:40:35.150 CC test/event/reactor/reactor.o 00:40:35.150 CC test/event/reactor_perf/reactor_perf.o 00:40:35.150 CC app/spdk_dd/spdk_dd.o 00:40:35.409 LINK hotplug 00:40:35.409 LINK reactor 00:40:35.409 CXX test/cpp_headers/event.o 00:40:35.409 LINK spdk_top 00:40:35.409 LINK reactor_perf 
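At this point the build has moved from libraries to the target applications and example binaries that later autotest stages drive (spdk_tgt, nvmf_tgt, iscsi_tgt, vhost, bdevperf, and the nvme examples linked above). For orientation, a minimal manual smoke test of the main target app looks roughly like the sketch below; the binary and script paths and the RPC method names are assumptions taken from SPDK's usual tree layout, not from this log.
  # Hypothetical smoke test; paths and RPC names assumed from the standard SPDK tree.
  ./build/bin/spdk_tgt -m 0x1 &          # start the target pinned to core 0
  tgt=$!
  ./scripts/rpc.py framework_wait_init   # returns once the app has finished startup
  ./scripts/rpc.py spdk_get_version      # any successful RPC round-trip confirms the socket is live
  kill "$tgt"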
00:40:35.409 CC test/event/app_repeat/app_repeat.o 00:40:35.409 CC test/event/scheduler/scheduler.o 00:40:35.668 CXX test/cpp_headers/fd_group.o 00:40:35.668 CXX test/cpp_headers/fd.o 00:40:35.668 CC test/nvme/aer/aer.o 00:40:35.668 LINK app_repeat 00:40:35.926 CC examples/nvme/cmb_copy/cmb_copy.o 00:40:35.926 LINK spdk_dd 00:40:35.926 CC examples/nvme/abort/abort.o 00:40:35.926 CXX test/cpp_headers/file.o 00:40:35.926 LINK scheduler 00:40:35.926 CC app/fio/nvme/fio_plugin.o 00:40:35.926 CC test/nvme/reset/reset.o 00:40:35.926 LINK aer 00:40:36.184 CC test/nvme/sgl/sgl.o 00:40:36.184 LINK cmb_copy 00:40:36.184 CXX test/cpp_headers/fsdev.o 00:40:36.184 LINK abort 00:40:36.184 CC test/nvme/e2edp/nvme_dp.o 00:40:36.184 CXX test/cpp_headers/fsdev_module.o 00:40:36.184 LINK reset 00:40:36.441 CXX test/cpp_headers/ftl.o 00:40:36.441 CC test/accel/dif/dif.o 00:40:36.441 LINK sgl 00:40:36.441 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:40:36.441 LINK nvme_dp 00:40:36.700 LINK spdk_nvme 00:40:36.700 CC test/blobfs/mkfs/mkfs.o 00:40:36.700 CXX test/cpp_headers/fuse_dispatcher.o 00:40:36.700 CXX test/cpp_headers/gpt_spec.o 00:40:36.700 CC app/fio/bdev/fio_plugin.o 00:40:36.700 CC test/lvol/esnap/esnap.o 00:40:36.700 CC test/nvme/overhead/overhead.o 00:40:36.957 LINK mkfs 00:40:36.957 LINK pmr_persistence 00:40:36.957 CXX test/cpp_headers/hexlify.o 00:40:36.957 CC test/nvme/err_injection/err_injection.o 00:40:36.957 CC test/nvme/startup/startup.o 00:40:36.957 CXX test/cpp_headers/histogram_data.o 00:40:37.214 LINK startup 00:40:37.214 CXX test/cpp_headers/idxd.o 00:40:37.214 LINK err_injection 00:40:37.214 CXX test/cpp_headers/idxd_spec.o 00:40:37.472 LINK overhead 00:40:37.472 CC test/nvme/reserve/reserve.o 00:40:37.472 CXX test/cpp_headers/init.o 00:40:37.472 CC test/nvme/simple_copy/simple_copy.o 00:40:37.472 LINK dif 00:40:37.472 CC examples/nvmf/nvmf/nvmf.o 00:40:37.472 CXX test/cpp_headers/ioat.o 00:40:37.729 LINK spdk_bdev 00:40:37.729 LINK reserve 00:40:37.729 CXX test/cpp_headers/ioat_spec.o 00:40:37.729 CXX test/cpp_headers/iscsi_spec.o 00:40:37.729 LINK simple_copy 00:40:37.729 CXX test/cpp_headers/json.o 00:40:37.987 CXX test/cpp_headers/jsonrpc.o 00:40:37.987 CXX test/cpp_headers/keyring.o 00:40:37.987 CC test/nvme/connect_stress/connect_stress.o 00:40:37.987 CC test/nvme/boot_partition/boot_partition.o 00:40:37.987 CC test/nvme/compliance/nvme_compliance.o 00:40:37.987 CC test/nvme/fused_ordering/fused_ordering.o 00:40:37.987 LINK nvmf 00:40:37.987 CC test/bdev/bdevio/bdevio.o 00:40:38.244 CC test/nvme/doorbell_aers/doorbell_aers.o 00:40:38.244 LINK boot_partition 00:40:38.244 CXX test/cpp_headers/keyring_module.o 00:40:38.244 LINK connect_stress 00:40:38.244 CC test/nvme/fdp/fdp.o 00:40:38.244 LINK fused_ordering 00:40:38.501 CXX test/cpp_headers/likely.o 00:40:38.501 CC test/nvme/cuse/cuse.o 00:40:38.501 CXX test/cpp_headers/log.o 00:40:38.501 LINK doorbell_aers 00:40:38.501 CXX test/cpp_headers/lvol.o 00:40:38.501 CXX test/cpp_headers/md5.o 00:40:38.501 LINK nvme_compliance 00:40:38.759 CXX test/cpp_headers/memory.o 00:40:38.759 CXX test/cpp_headers/mmio.o 00:40:38.759 LINK bdevio 00:40:38.759 CXX test/cpp_headers/nbd.o 00:40:38.759 CXX test/cpp_headers/net.o 00:40:38.759 CXX test/cpp_headers/notify.o 00:40:38.759 CXX test/cpp_headers/nvme.o 00:40:38.759 CXX test/cpp_headers/nvme_intel.o 00:40:38.759 CXX test/cpp_headers/nvme_ocssd.o 00:40:38.759 CXX test/cpp_headers/nvme_ocssd_spec.o 00:40:39.016 LINK fdp 00:40:39.016 CXX test/cpp_headers/nvme_spec.o 00:40:39.016 CXX 
test/cpp_headers/nvme_zns.o
00:40:39.016 CXX test/cpp_headers/nvmf_cmd.o
00:40:39.016 CXX test/cpp_headers/nvmf_fc_spec.o
00:40:39.016 CXX test/cpp_headers/nvmf.o
00:40:39.016 CXX test/cpp_headers/nvmf_spec.o
00:40:39.274 CXX test/cpp_headers/nvmf_transport.o
00:40:39.274 CXX test/cpp_headers/opal.o
00:40:39.274 CXX test/cpp_headers/opal_spec.o
00:40:39.274 CXX test/cpp_headers/pci_ids.o
00:40:39.274 CXX test/cpp_headers/pipe.o
00:40:39.274 CXX test/cpp_headers/queue.o
00:40:39.274 CXX test/cpp_headers/reduce.o
00:40:39.274 CXX test/cpp_headers/rpc.o
00:40:39.274 CXX test/cpp_headers/scheduler.o
00:40:39.274 CXX test/cpp_headers/scsi.o
00:40:39.531 CXX test/cpp_headers/scsi_spec.o
00:40:39.531 CXX test/cpp_headers/sock.o
00:40:39.532 CXX test/cpp_headers/stdinc.o
00:40:39.532 CXX test/cpp_headers/string.o
00:40:39.532 CXX test/cpp_headers/thread.o
00:40:39.532 CXX test/cpp_headers/trace.o
00:40:39.532 CXX test/cpp_headers/trace_parser.o
00:40:39.532 CXX test/cpp_headers/tree.o
00:40:39.532 CXX test/cpp_headers/ublk.o
00:40:39.532 CXX test/cpp_headers/util.o
00:40:39.532 CXX test/cpp_headers/uuid.o
00:40:39.790 CXX test/cpp_headers/version.o
00:40:39.790 CXX test/cpp_headers/vfio_user_pci.o
00:40:39.790 CXX test/cpp_headers/vfio_user_spec.o
00:40:39.790 CXX test/cpp_headers/vhost.o
00:40:39.790 CXX test/cpp_headers/vmd.o
00:40:39.790 CXX test/cpp_headers/xor.o
00:40:39.790 CXX test/cpp_headers/zipf.o
00:40:40.357 LINK cuse
00:40:42.278 LINK esnap
00:40:42.850
00:40:42.850 real 1m57.975s
00:40:42.850 user 11m23.899s
00:40:42.850 sys 2m10.398s
00:40:42.850 09:51:49 make -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:40:42.850 09:51:49 make -- common/autotest_common.sh@10 -- $ set +x
00:40:42.850 ************************************
00:40:42.850 END TEST make
00:40:42.850 ************************************
00:40:42.850 09:51:49 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:40:42.850 09:51:49 -- pm/common@29 -- $ signal_monitor_resources TERM
00:40:42.850 09:51:49 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:40:42.850 09:51:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:40:42.850 09:51:49 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:40:42.850 09:51:49 -- pm/common@44 -- $ pid=5465
00:40:42.850 09:51:49 -- pm/common@50 -- $ kill -TERM 5465
00:40:42.850 09:51:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:40:42.850 09:51:49 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:40:42.850 09:51:49 -- pm/common@44 -- $ pid=5467
00:40:42.850 09:51:49 -- pm/common@50 -- $ kill -TERM 5467
00:40:42.850 09:51:49 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 ))
00:40:42.850 09:51:49 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:40:42.850 09:51:49 -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:40:42.850 09:51:49 -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:40:42.850 09:51:49 -- common/autotest_common.sh@1711 -- # lcov --version
00:40:43.109 09:51:49 -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:40:43.110 09:51:49 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:40:43.110 09:51:49 -- scripts/common.sh@333 -- # local ver1 ver1_l
00:40:43.110 09:51:49 -- scripts/common.sh@334 -- # local ver2 ver2_l
00:40:43.110 09:51:49 -- scripts/common.sh@336 -- # IFS=.-:
00:40:43.110 09:51:49 --
scripts/common.sh@336 -- # read -ra ver1 00:40:43.110 09:51:49 -- scripts/common.sh@337 -- # IFS=.-: 00:40:43.110 09:51:49 -- scripts/common.sh@337 -- # read -ra ver2 00:40:43.110 09:51:49 -- scripts/common.sh@338 -- # local 'op=<' 00:40:43.110 09:51:49 -- scripts/common.sh@340 -- # ver1_l=2 00:40:43.110 09:51:49 -- scripts/common.sh@341 -- # ver2_l=1 00:40:43.110 09:51:49 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:43.110 09:51:49 -- scripts/common.sh@344 -- # case "$op" in 00:40:43.110 09:51:49 -- scripts/common.sh@345 -- # : 1 00:40:43.110 09:51:49 -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:43.110 09:51:49 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:43.110 09:51:49 -- scripts/common.sh@365 -- # decimal 1 00:40:43.110 09:51:49 -- scripts/common.sh@353 -- # local d=1 00:40:43.110 09:51:49 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:43.110 09:51:49 -- scripts/common.sh@355 -- # echo 1 00:40:43.110 09:51:49 -- scripts/common.sh@365 -- # ver1[v]=1 00:40:43.110 09:51:49 -- scripts/common.sh@366 -- # decimal 2 00:40:43.110 09:51:49 -- scripts/common.sh@353 -- # local d=2 00:40:43.110 09:51:49 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:43.110 09:51:49 -- scripts/common.sh@355 -- # echo 2 00:40:43.110 09:51:49 -- scripts/common.sh@366 -- # ver2[v]=2 00:40:43.110 09:51:49 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:43.110 09:51:49 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:43.110 09:51:49 -- scripts/common.sh@368 -- # return 0 00:40:43.110 09:51:49 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:43.110 09:51:49 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:43.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:43.110 --rc genhtml_branch_coverage=1 00:40:43.110 --rc genhtml_function_coverage=1 00:40:43.110 --rc genhtml_legend=1 00:40:43.110 --rc geninfo_all_blocks=1 00:40:43.110 --rc geninfo_unexecuted_blocks=1 00:40:43.110 00:40:43.110 ' 00:40:43.110 09:51:49 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:43.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:43.110 --rc genhtml_branch_coverage=1 00:40:43.110 --rc genhtml_function_coverage=1 00:40:43.110 --rc genhtml_legend=1 00:40:43.110 --rc geninfo_all_blocks=1 00:40:43.110 --rc geninfo_unexecuted_blocks=1 00:40:43.110 00:40:43.110 ' 00:40:43.110 09:51:49 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:43.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:43.110 --rc genhtml_branch_coverage=1 00:40:43.110 --rc genhtml_function_coverage=1 00:40:43.110 --rc genhtml_legend=1 00:40:43.110 --rc geninfo_all_blocks=1 00:40:43.110 --rc geninfo_unexecuted_blocks=1 00:40:43.110 00:40:43.110 ' 00:40:43.110 09:51:49 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:43.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:43.110 --rc genhtml_branch_coverage=1 00:40:43.110 --rc genhtml_function_coverage=1 00:40:43.110 --rc genhtml_legend=1 00:40:43.110 --rc geninfo_all_blocks=1 00:40:43.110 --rc geninfo_unexecuted_blocks=1 00:40:43.110 00:40:43.110 ' 00:40:43.110 09:51:49 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:40:43.110 09:51:49 -- nvmf/common.sh@7 -- # uname -s 00:40:43.110 09:51:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:43.110 09:51:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:43.110 09:51:49 -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:43.110 09:51:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:43.110 09:51:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:43.110 09:51:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:43.110 09:51:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:43.110 09:51:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:43.110 09:51:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:43.110 09:51:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:43.110 09:51:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:40:43.110 09:51:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:40:43.110 09:51:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:43.110 09:51:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:43.110 09:51:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:40:43.110 09:51:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:43.110 09:51:49 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:43.110 09:51:49 -- scripts/common.sh@15 -- # shopt -s extglob 00:40:43.110 09:51:49 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:43.110 09:51:49 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:43.110 09:51:49 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:43.110 09:51:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:43.110 09:51:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:43.110 09:51:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:43.110 09:51:49 -- paths/export.sh@5 -- # export PATH 00:40:43.110 09:51:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:43.110 09:51:49 -- nvmf/common.sh@51 -- # : 0 00:40:43.110 09:51:49 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:43.110 09:51:49 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:43.110 09:51:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:43.110 09:51:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:43.110 09:51:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:43.110 09:51:49 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:43.110 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:43.110 09:51:49 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:43.110 09:51:49 
-- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:43.110 09:51:49 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:43.110 09:51:49 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:40:43.110 09:51:49 -- spdk/autotest.sh@32 -- # uname -s 00:40:43.110 09:51:49 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:40:43.110 09:51:49 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:40:43.110 09:51:49 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:40:43.110 09:51:49 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:40:43.110 09:51:49 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:40:43.110 09:51:49 -- spdk/autotest.sh@44 -- # modprobe nbd 00:40:43.110 09:51:50 -- spdk/autotest.sh@46 -- # type -P udevadm 00:40:43.110 09:51:50 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:40:43.110 09:51:50 -- spdk/autotest.sh@48 -- # udevadm_pid=56574 00:40:43.110 09:51:50 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:40:43.110 09:51:50 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:40:43.110 09:51:50 -- pm/common@17 -- # local monitor 00:40:43.110 09:51:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:40:43.110 09:51:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:40:43.110 09:51:50 -- pm/common@25 -- # sleep 1 00:40:43.110 09:51:50 -- pm/common@21 -- # date +%s 00:40:43.110 09:51:50 -- pm/common@21 -- # date +%s 00:40:43.110 09:51:50 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733737910 00:40:43.110 09:51:50 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733737910 00:40:43.110 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733737910_collect-vmstat.pm.log 00:40:43.110 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733737910_collect-cpu-load.pm.log 00:40:44.047 09:51:51 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:40:44.047 09:51:51 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:40:44.047 09:51:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:44.047 09:51:51 -- common/autotest_common.sh@10 -- # set +x 00:40:44.047 09:51:51 -- spdk/autotest.sh@59 -- # create_test_list 00:40:44.048 09:51:51 -- common/autotest_common.sh@752 -- # xtrace_disable 00:40:44.048 09:51:51 -- common/autotest_common.sh@10 -- # set +x 00:40:44.048 09:51:51 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:40:44.306 09:51:51 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:40:44.306 09:51:51 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:40:44.306 09:51:51 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:40:44.306 09:51:51 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:40:44.306 09:51:51 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:40:44.306 09:51:51 -- common/autotest_common.sh@1457 -- # uname 00:40:44.306 09:51:51 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:40:44.306 09:51:51 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:40:44.306 09:51:51 -- common/autotest_common.sh@1477 -- # 
uname 00:40:44.306 09:51:51 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:40:44.306 09:51:51 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:40:44.306 09:51:51 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:40:44.306 lcov: LCOV version 1.15 00:40:44.306 09:51:51 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:40:59.183 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:40:59.183 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:41:17.275 09:52:21 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:41:17.275 09:52:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:17.275 09:52:21 -- common/autotest_common.sh@10 -- # set +x 00:41:17.275 09:52:21 -- spdk/autotest.sh@78 -- # rm -f 00:41:17.275 09:52:21 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:41:17.275 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:41:17.275 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:41:17.275 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:41:17.275 09:52:22 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:41:17.275 09:52:22 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:41:17.275 09:52:22 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:41:17.275 09:52:22 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:41:17.275 09:52:22 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:41:17.275 09:52:22 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:41:17.275 09:52:22 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:41:17.275 09:52:22 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:41:17.275 09:52:22 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:41:17.275 09:52:22 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:41:17.275 09:52:22 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:41:17.275 09:52:22 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:41:17.275 09:52:22 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:41:17.275 09:52:22 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:41:17.275 09:52:22 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:41:17.275 09:52:22 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:41:17.275 09:52:22 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:41:17.275 09:52:22 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:41:17.275 09:52:22 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:41:17.275 09:52:22 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:41:17.275 09:52:22 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:41:17.275 09:52:22 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:41:17.275 09:52:22 -- 
common/autotest_common.sh@1650 -- # local device=nvme1n2 00:41:17.275 09:52:22 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:41:17.275 09:52:22 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:41:17.275 09:52:22 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:41:17.275 09:52:22 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:41:17.275 09:52:22 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:41:17.275 09:52:22 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:41:17.275 09:52:22 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:41:17.275 09:52:22 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:41:17.275 09:52:22 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:41:17.275 09:52:22 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:41:17.275 09:52:22 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:41:17.275 09:52:22 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:41:17.275 09:52:22 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:41:17.275 No valid GPT data, bailing 00:41:17.275 09:52:22 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:41:17.275 09:52:22 -- scripts/common.sh@394 -- # pt= 00:41:17.275 09:52:22 -- scripts/common.sh@395 -- # return 1 00:41:17.275 09:52:22 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:41:17.275 1+0 records in 00:41:17.275 1+0 records out 00:41:17.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00598694 s, 175 MB/s 00:41:17.275 09:52:22 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:41:17.275 09:52:22 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:41:17.275 09:52:22 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:41:17.275 09:52:22 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:41:17.275 09:52:22 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:41:17.275 No valid GPT data, bailing 00:41:17.275 09:52:22 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:41:17.275 09:52:22 -- scripts/common.sh@394 -- # pt= 00:41:17.275 09:52:22 -- scripts/common.sh@395 -- # return 1 00:41:17.275 09:52:22 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:41:17.275 1+0 records in 00:41:17.275 1+0 records out 00:41:17.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00533731 s, 196 MB/s 00:41:17.275 09:52:22 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:41:17.275 09:52:22 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:41:17.275 09:52:22 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:41:17.275 09:52:22 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:41:17.275 09:52:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:41:17.275 No valid GPT data, bailing 00:41:17.275 09:52:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:41:17.275 09:52:23 -- scripts/common.sh@394 -- # pt= 00:41:17.275 09:52:23 -- scripts/common.sh@395 -- # return 1 00:41:17.275 09:52:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:41:17.275 1+0 records in 00:41:17.275 1+0 records out 00:41:17.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0044551 s, 235 MB/s 00:41:17.275 09:52:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:41:17.275 09:52:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:41:17.275 
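The xtrace above is autotest's stale-metadata sweep: each NVMe namespace is probed with scripts/spdk-gpt.py and blkid, and only namespaces with no recognizable partition table get their first megabyte zeroed; the loop's final iteration, for /dev/nvme1n3, continues below. A condensed paraphrase of what the trace shows, with block_in_use's internals inlined and simplified relative to the real script:
  # Condensed paraphrase of the sweep traced above; not the verbatim autotest.sh.
  shopt -s extglob                            # the !(*p*) glob below needs extglob, as in the trace
  for dev in /dev/nvme*n!(*p*); do            # whole namespaces only, partitions excluded
      scripts/spdk-gpt.py "$dev" && continue  # valid GPT data: device is in use, leave it alone
      [[ -n $(blkid -s PTTYPE -o value "$dev") ]] && continue  # some other partition table: skip
      dd if=/dev/zero of="$dev" bs=1M count=1 # no table at all: wipe stale metadata at the head
  done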
09:52:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3
00:41:17.275 09:52:23 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt
00:41:17.275 09:52:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3
00:41:17.275 No valid GPT data, bailing
00:41:17.275 09:52:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3
00:41:17.275 09:52:23 -- scripts/common.sh@394 -- # pt=
00:41:17.275 09:52:23 -- scripts/common.sh@395 -- # return 1
00:41:17.275 09:52:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1
00:41:17.275 1+0 records in
00:41:17.275 1+0 records out
00:41:17.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00610955 s, 172 MB/s
00:41:17.275 09:52:23 -- spdk/autotest.sh@105 -- # sync
00:41:17.275 09:52:23 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:41:17.275 09:52:23 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:41:17.275 09:52:23 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:41:19.186 09:52:26 -- spdk/autotest.sh@111 -- # uname -s
00:41:19.186 09:52:26 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:41:19.186 09:52:26 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:41:19.186 09:52:26 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:41:20.123 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:41:20.123 Hugepages
00:41:20.123 node hugesize free / total
00:41:20.123 node0 1048576kB 0 / 0
00:41:20.123 node0 2048kB 0 / 0
00:41:20.123
00:41:20.123 Type BDF Vendor Device NUMA Driver Device Block devices
00:41:20.123 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:41:20.123 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:41:20.123 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:41:20.382 09:52:27 -- spdk/autotest.sh@117 -- # uname -s
00:41:20.382 09:52:27 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:41:20.382 09:52:27 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:41:20.382 09:52:27 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:41:20.951 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:41:21.210 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:41:21.210 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:41:21.210 09:52:28 -- common/autotest_common.sh@1517 -- # sleep 1
00:41:22.167 09:52:29 -- common/autotest_common.sh@1518 -- # bdfs=()
00:41:22.167 09:52:29 -- common/autotest_common.sh@1518 -- # local bdfs
00:41:22.167 09:52:29 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:41:22.167 09:52:29 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:41:22.167 09:52:29 -- common/autotest_common.sh@1498 -- # bdfs=()
00:41:22.167 09:52:29 -- common/autotest_common.sh@1498 -- # local bdfs
00:41:22.167 09:52:29 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:41:22.167 09:52:29 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:41:22.425 09:52:29 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:41:22.425 09:52:29 -- common/autotest_common.sh@1500 -- # (( 2 == 0 ))
00:41:22.425 09:52:29 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:41:22.425 09:52:29 --
common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:41:22.993 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:41:22.993 Waiting for block devices as requested 00:41:22.993 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:41:22.993 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:41:23.251 09:52:30 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:41:23.251 09:52:30 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:41:23.251 09:52:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:41:23.251 09:52:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:41:23.251 09:52:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:41:23.251 09:52:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:41:23.251 09:52:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:41:23.251 09:52:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:41:23.251 09:52:30 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:41:23.251 09:52:30 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:41:23.251 09:52:30 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:41:23.251 09:52:30 -- common/autotest_common.sh@1531 -- # grep oacs 00:41:23.251 09:52:30 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:41:23.251 09:52:30 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:41:23.251 09:52:30 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:41:23.251 09:52:30 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:41:23.251 09:52:30 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:41:23.251 09:52:30 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:41:23.251 09:52:30 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:41:23.251 09:52:30 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:41:23.251 09:52:30 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:41:23.251 09:52:30 -- common/autotest_common.sh@1543 -- # continue 00:41:23.251 09:52:30 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:41:23.251 09:52:30 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:41:23.251 09:52:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:41:23.251 09:52:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:41:23.251 09:52:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:41:23.251 09:52:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:41:23.251 09:52:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:41:23.251 09:52:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:41:23.251 09:52:30 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:41:23.251 09:52:30 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:41:23.251 09:52:30 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:41:23.251 09:52:30 -- common/autotest_common.sh@1531 -- # grep oacs 00:41:23.251 09:52:30 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:41:23.251 09:52:30 -- 
common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:41:23.251 09:52:30 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:41:23.251 09:52:30 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:41:23.251 09:52:30 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:41:23.251 09:52:30 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:41:23.251 09:52:30 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:41:23.251 09:52:30 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:41:23.251 09:52:30 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:41:23.251 09:52:30 -- common/autotest_common.sh@1543 -- # continue 00:41:23.251 09:52:30 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:41:23.251 09:52:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:23.251 09:52:30 -- common/autotest_common.sh@10 -- # set +x 00:41:23.251 09:52:30 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:41:23.251 09:52:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:23.251 09:52:30 -- common/autotest_common.sh@10 -- # set +x 00:41:23.251 09:52:30 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:41:24.189 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:41:24.189 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:41:24.189 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:41:24.449 09:52:31 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:41:24.449 09:52:31 -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:24.449 09:52:31 -- common/autotest_common.sh@10 -- # set +x 00:41:24.449 09:52:31 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:41:24.449 09:52:31 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:41:24.449 09:52:31 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:41:24.449 09:52:31 -- common/autotest_common.sh@1563 -- # bdfs=() 00:41:24.449 09:52:31 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:41:24.449 09:52:31 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:41:24.449 09:52:31 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:41:24.449 09:52:31 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:41:24.449 09:52:31 -- common/autotest_common.sh@1498 -- # bdfs=() 00:41:24.449 09:52:31 -- common/autotest_common.sh@1498 -- # local bdfs 00:41:24.449 09:52:31 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:41:24.449 09:52:31 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:41:24.449 09:52:31 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:41:24.449 09:52:31 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:41:24.449 09:52:31 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:41:24.449 09:52:31 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:41:24.449 09:52:31 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:41:24.449 09:52:31 -- common/autotest_common.sh@1566 -- # device=0x0010 00:41:24.449 09:52:31 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:41:24.449 09:52:31 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:41:24.449 09:52:31 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:41:24.449 09:52:31 -- common/autotest_common.sh@1566 -- # device=0x0010 00:41:24.449 
09:52:31 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:41:24.449 09:52:31 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:41:24.449 09:52:31 -- common/autotest_common.sh@1572 -- # return 0 00:41:24.449 09:52:31 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:41:24.449 09:52:31 -- common/autotest_common.sh@1580 -- # return 0 00:41:24.449 09:52:31 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:41:24.449 09:52:31 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:41:24.449 09:52:31 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:41:24.449 09:52:31 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:41:24.449 09:52:31 -- spdk/autotest.sh@149 -- # timing_enter lib 00:41:24.449 09:52:31 -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:24.449 09:52:31 -- common/autotest_common.sh@10 -- # set +x 00:41:24.449 09:52:31 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:41:24.449 09:52:31 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:41:24.449 09:52:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:24.449 09:52:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:24.449 09:52:31 -- common/autotest_common.sh@10 -- # set +x 00:41:24.449 ************************************ 00:41:24.449 START TEST env 00:41:24.449 ************************************ 00:41:24.449 09:52:31 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:41:24.709 * Looking for test storage... 00:41:24.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:41:24.709 09:52:31 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:24.709 09:52:31 env -- common/autotest_common.sh@1711 -- # lcov --version 00:41:24.709 09:52:31 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:24.709 09:52:31 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:24.709 09:52:31 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:24.709 09:52:31 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:24.709 09:52:31 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:24.709 09:52:31 env -- scripts/common.sh@336 -- # IFS=.-: 00:41:24.709 09:52:31 env -- scripts/common.sh@336 -- # read -ra ver1 00:41:24.709 09:52:31 env -- scripts/common.sh@337 -- # IFS=.-: 00:41:24.709 09:52:31 env -- scripts/common.sh@337 -- # read -ra ver2 00:41:24.709 09:52:31 env -- scripts/common.sh@338 -- # local 'op=<' 00:41:24.709 09:52:31 env -- scripts/common.sh@340 -- # ver1_l=2 00:41:24.709 09:52:31 env -- scripts/common.sh@341 -- # ver2_l=1 00:41:24.709 09:52:31 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:24.709 09:52:31 env -- scripts/common.sh@344 -- # case "$op" in 00:41:24.709 09:52:31 env -- scripts/common.sh@345 -- # : 1 00:41:24.709 09:52:31 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:24.709 09:52:31 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:24.709 09:52:31 env -- scripts/common.sh@365 -- # decimal 1 00:41:24.709 09:52:31 env -- scripts/common.sh@353 -- # local d=1 00:41:24.709 09:52:31 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:24.709 09:52:31 env -- scripts/common.sh@355 -- # echo 1 00:41:24.709 09:52:31 env -- scripts/common.sh@365 -- # ver1[v]=1 00:41:24.709 09:52:31 env -- scripts/common.sh@366 -- # decimal 2 00:41:24.709 09:52:31 env -- scripts/common.sh@353 -- # local d=2 00:41:24.709 09:52:31 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:24.709 09:52:31 env -- scripts/common.sh@355 -- # echo 2 00:41:24.709 09:52:31 env -- scripts/common.sh@366 -- # ver2[v]=2 00:41:24.709 09:52:31 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:24.709 09:52:31 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:24.709 09:52:31 env -- scripts/common.sh@368 -- # return 0 00:41:24.709 09:52:31 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:24.709 09:52:31 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:24.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:24.709 --rc genhtml_branch_coverage=1 00:41:24.709 --rc genhtml_function_coverage=1 00:41:24.709 --rc genhtml_legend=1 00:41:24.709 --rc geninfo_all_blocks=1 00:41:24.709 --rc geninfo_unexecuted_blocks=1 00:41:24.709 00:41:24.709 ' 00:41:24.709 09:52:31 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:24.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:24.709 --rc genhtml_branch_coverage=1 00:41:24.709 --rc genhtml_function_coverage=1 00:41:24.709 --rc genhtml_legend=1 00:41:24.709 --rc geninfo_all_blocks=1 00:41:24.709 --rc geninfo_unexecuted_blocks=1 00:41:24.709 00:41:24.709 ' 00:41:24.709 09:52:31 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:24.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:24.709 --rc genhtml_branch_coverage=1 00:41:24.709 --rc genhtml_function_coverage=1 00:41:24.709 --rc genhtml_legend=1 00:41:24.709 --rc geninfo_all_blocks=1 00:41:24.709 --rc geninfo_unexecuted_blocks=1 00:41:24.709 00:41:24.709 ' 00:41:24.709 09:52:31 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:24.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:24.709 --rc genhtml_branch_coverage=1 00:41:24.709 --rc genhtml_function_coverage=1 00:41:24.709 --rc genhtml_legend=1 00:41:24.709 --rc geninfo_all_blocks=1 00:41:24.709 --rc geninfo_unexecuted_blocks=1 00:41:24.709 00:41:24.709 ' 00:41:24.709 09:52:31 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:41:24.709 09:52:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:24.709 09:52:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:24.709 09:52:31 env -- common/autotest_common.sh@10 -- # set +x 00:41:24.709 ************************************ 00:41:24.709 START TEST env_memory 00:41:24.709 ************************************ 00:41:24.709 09:52:31 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:41:24.709 00:41:24.709 00:41:24.709 CUnit - A unit testing framework for C - Version 2.1-3 00:41:24.709 http://cunit.sourceforge.net/ 00:41:24.709 00:41:24.709 00:41:24.709 Suite: memory 00:41:24.709 Test: alloc and free memory map ...[2024-12-09 09:52:31.720997] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:41:24.709 passed 00:41:24.709 Test: mem map translation ...[2024-12-09 09:52:31.740770] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:41:24.709 [2024-12-09 09:52:31.740892] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:41:24.709 [2024-12-09 09:52:31.740962] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:41:24.709 [2024-12-09 09:52:31.740993] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:41:24.968 passed 00:41:24.968 Test: mem map registration ...[2024-12-09 09:52:31.785126] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:41:24.968 [2024-12-09 09:52:31.785226] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:41:24.968 passed 00:41:24.968 Test: mem map adjacent registrations ...passed 00:41:24.968 00:41:24.968 Run Summary: Type Total Ran Passed Failed Inactive 00:41:24.968 suites 1 1 n/a 0 0 00:41:24.968 tests 4 4 4 0 0 00:41:24.968 asserts 152 152 152 0 n/a 00:41:24.968 00:41:24.968 Elapsed time = 0.146 seconds 00:41:24.968 00:41:24.968 real 0m0.173s 00:41:24.968 user 0m0.154s 00:41:24.968 sys 0m0.014s 00:41:24.968 ************************************ 00:41:24.968 END TEST env_memory 00:41:24.968 ************************************ 00:41:24.969 09:52:31 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:24.969 09:52:31 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:41:24.969 09:52:31 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:41:24.969 09:52:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:24.969 09:52:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:24.969 09:52:31 env -- common/autotest_common.sh@10 -- # set +x 00:41:24.969 ************************************ 00:41:24.969 START TEST env_vtophys 00:41:24.969 ************************************ 00:41:24.969 09:52:31 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:41:24.969 EAL: lib.eal log level changed from notice to debug 00:41:24.969 EAL: Detected lcore 0 as core 0 on socket 0 00:41:24.969 EAL: Detected lcore 1 as core 0 on socket 0 00:41:24.969 EAL: Detected lcore 2 as core 0 on socket 0 00:41:24.969 EAL: Detected lcore 3 as core 0 on socket 0 00:41:24.969 EAL: Detected lcore 4 as core 0 on socket 0 00:41:24.969 EAL: Detected lcore 5 as core 0 on socket 0 00:41:24.969 EAL: Detected lcore 6 as core 0 on socket 0 00:41:24.969 EAL: Detected lcore 7 as core 0 on socket 0 00:41:24.969 EAL: Detected lcore 8 as core 0 on socket 0 00:41:24.969 EAL: Detected lcore 9 as core 0 on socket 0 00:41:24.969 EAL: Maximum logical cores by configuration: 128 00:41:24.969 EAL: Detected CPU lcores: 10 00:41:24.969 EAL: Detected NUMA nodes: 1 00:41:24.969 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:41:24.969 EAL: Detected shared linkage of DPDK 00:41:24.969 EAL: No 
shared files mode enabled, IPC will be disabled 00:41:24.969 EAL: Selected IOVA mode 'PA' 00:41:24.969 EAL: Probing VFIO support... 00:41:24.969 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:41:24.969 EAL: VFIO modules not loaded, skipping VFIO support... 00:41:24.969 EAL: Ask a virtual area of 0x2e000 bytes 00:41:24.969 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:41:24.969 EAL: Setting up physically contiguous memory... 00:41:24.969 EAL: Setting maximum number of open files to 524288 00:41:24.969 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:41:24.969 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:41:24.969 EAL: Ask a virtual area of 0x61000 bytes 00:41:24.969 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:41:24.969 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:41:24.969 EAL: Ask a virtual area of 0x400000000 bytes 00:41:24.969 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:41:24.969 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:41:24.969 EAL: Ask a virtual area of 0x61000 bytes 00:41:24.969 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:41:24.969 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:41:24.969 EAL: Ask a virtual area of 0x400000000 bytes 00:41:24.969 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:41:24.969 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:41:24.969 EAL: Ask a virtual area of 0x61000 bytes 00:41:24.969 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:41:24.969 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:41:24.969 EAL: Ask a virtual area of 0x400000000 bytes 00:41:24.969 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:41:24.969 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:41:24.969 EAL: Ask a virtual area of 0x61000 bytes 00:41:24.969 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:41:24.969 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:41:24.969 EAL: Ask a virtual area of 0x400000000 bytes 00:41:24.969 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:41:24.969 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:41:24.969 EAL: Hugepages will be freed exactly as allocated. 00:41:24.969 EAL: No shared files mode enabled, IPC is disabled 00:41:24.969 EAL: No shared files mode enabled, IPC is disabled 00:41:25.228 EAL: TSC frequency is ~2290000 KHz 00:41:25.228 EAL: Main lcore 0 is ready (tid=7fc61b222a00;cpuset=[0]) 00:41:25.228 EAL: Trying to obtain current memory policy. 00:41:25.228 EAL: Setting policy MPOL_PREFERRED for socket 0 00:41:25.228 EAL: Restoring previous memory policy: 0 00:41:25.228 EAL: request: mp_malloc_sync 00:41:25.228 EAL: No shared files mode enabled, IPC is disabled 00:41:25.228 EAL: Heap on socket 0 was expanded by 2MB 00:41:25.228 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:41:25.228 EAL: No PCI address specified using 'addr=' in: bus=pci 00:41:25.228 EAL: Mem event callback 'spdk:(nil)' registered 00:41:25.228 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:41:25.228 00:41:25.228 00:41:25.228 CUnit - A unit testing framework for C - Version 2.1-3 00:41:25.228 http://cunit.sourceforge.net/ 00:41:25.228 00:41:25.228 00:41:25.228 Suite: components_suite 00:41:25.228 Test: vtophys_malloc_test ...passed 00:41:25.228 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:41:25.228 EAL: Setting policy MPOL_PREFERRED for socket 0 00:41:25.228 EAL: Restoring previous memory policy: 4 00:41:25.228 EAL: Calling mem event callback 'spdk:(nil)' 00:41:25.228 EAL: request: mp_malloc_sync 00:41:25.228 EAL: No shared files mode enabled, IPC is disabled 00:41:25.228 EAL: Heap on socket 0 was expanded by 4MB 00:41:25.228 EAL: Calling mem event callback 'spdk:(nil)' 00:41:25.228 EAL: request: mp_malloc_sync 00:41:25.228 EAL: No shared files mode enabled, IPC is disabled 00:41:25.228 EAL: Heap on socket 0 was shrunk by 4MB 00:41:25.228 EAL: Trying to obtain current memory policy. 00:41:25.228 EAL: Setting policy MPOL_PREFERRED for socket 0 00:41:25.228 EAL: Restoring previous memory policy: 4 00:41:25.228 EAL: Calling mem event callback 'spdk:(nil)' 00:41:25.228 EAL: request: mp_malloc_sync 00:41:25.228 EAL: No shared files mode enabled, IPC is disabled 00:41:25.228 EAL: Heap on socket 0 was expanded by 6MB 00:41:25.228 EAL: Calling mem event callback 'spdk:(nil)' 00:41:25.228 EAL: request: mp_malloc_sync 00:41:25.228 EAL: No shared files mode enabled, IPC is disabled 00:41:25.228 EAL: Heap on socket 0 was shrunk by 6MB 00:41:25.228 EAL: Trying to obtain current memory policy. 00:41:25.228 EAL: Setting policy MPOL_PREFERRED for socket 0 00:41:25.228 EAL: Restoring previous memory policy: 4 00:41:25.228 EAL: Calling mem event callback 'spdk:(nil)' 00:41:25.228 EAL: request: mp_malloc_sync 00:41:25.228 EAL: No shared files mode enabled, IPC is disabled 00:41:25.228 EAL: Heap on socket 0 was expanded by 10MB 00:41:25.228 EAL: Calling mem event callback 'spdk:(nil)' 00:41:25.228 EAL: request: mp_malloc_sync 00:41:25.228 EAL: No shared files mode enabled, IPC is disabled 00:41:25.228 EAL: Heap on socket 0 was shrunk by 10MB 00:41:25.228 EAL: Trying to obtain current memory policy. 00:41:25.228 EAL: Setting policy MPOL_PREFERRED for socket 0 00:41:25.228 EAL: Restoring previous memory policy: 4 00:41:25.228 EAL: Calling mem event callback 'spdk:(nil)' 00:41:25.228 EAL: request: mp_malloc_sync 00:41:25.228 EAL: No shared files mode enabled, IPC is disabled 00:41:25.228 EAL: Heap on socket 0 was expanded by 18MB 00:41:25.228 EAL: Calling mem event callback 'spdk:(nil)' 00:41:25.228 EAL: request: mp_malloc_sync 00:41:25.228 EAL: No shared files mode enabled, IPC is disabled 00:41:25.228 EAL: Heap on socket 0 was shrunk by 18MB 00:41:25.228 EAL: Trying to obtain current memory policy. 00:41:25.228 EAL: Setting policy MPOL_PREFERRED for socket 0 00:41:25.228 EAL: Restoring previous memory policy: 4 00:41:25.228 EAL: Calling mem event callback 'spdk:(nil)' 00:41:25.228 EAL: request: mp_malloc_sync 00:41:25.228 EAL: No shared files mode enabled, IPC is disabled 00:41:25.228 EAL: Heap on socket 0 was expanded by 34MB 00:41:25.228 EAL: Calling mem event callback 'spdk:(nil)' 00:41:25.228 EAL: request: mp_malloc_sync 00:41:25.228 EAL: No shared files mode enabled, IPC is disabled 00:41:25.228 EAL: Heap on socket 0 was shrunk by 34MB 00:41:25.228 EAL: Trying to obtain current memory policy. 
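Each vtophys_spdk_malloc_test pass above repeats one stanza: save and restore the NUMA memory policy, allocate from the SPDK/DPDK heap (EAL reports an expansion), then free it (EAL reports the matching shrink); the pass in progress continues below. When skimming a long run like this one, a quick way to confirm that every expansion paired with a shrink is the one-liner that follows; the log file name is an assumption.
  # Log-reading aid (file name assumed): equal counts per size and direction mean no allocation leaked.
  grep -o "Heap on socket 0 was \(expanded\|shrunk\) by [0-9]*MB" build.log | sort | uniq -c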
00:41:25.228 EAL: Setting policy MPOL_PREFERRED for socket 0 00:41:25.228 EAL: Restoring previous memory policy: 4 00:41:25.228 EAL: Calling mem event callback 'spdk:(nil)' 00:41:25.229 EAL: request: mp_malloc_sync 00:41:25.229 EAL: No shared files mode enabled, IPC is disabled 00:41:25.229 EAL: Heap on socket 0 was expanded by 66MB 00:41:25.229 EAL: Calling mem event callback 'spdk:(nil)' 00:41:25.229 EAL: request: mp_malloc_sync 00:41:25.229 EAL: No shared files mode enabled, IPC is disabled 00:41:25.229 EAL: Heap on socket 0 was shrunk by 66MB 00:41:25.229 EAL: Trying to obtain current memory policy. 00:41:25.229 EAL: Setting policy MPOL_PREFERRED for socket 0 00:41:25.229 EAL: Restoring previous memory policy: 4 00:41:25.229 EAL: Calling mem event callback 'spdk:(nil)' 00:41:25.229 EAL: request: mp_malloc_sync 00:41:25.229 EAL: No shared files mode enabled, IPC is disabled 00:41:25.229 EAL: Heap on socket 0 was expanded by 130MB 00:41:25.229 EAL: Calling mem event callback 'spdk:(nil)' 00:41:25.229 EAL: request: mp_malloc_sync 00:41:25.229 EAL: No shared files mode enabled, IPC is disabled 00:41:25.229 EAL: Heap on socket 0 was shrunk by 130MB 00:41:25.229 EAL: Trying to obtain current memory policy. 00:41:25.229 EAL: Setting policy MPOL_PREFERRED for socket 0 00:41:25.229 EAL: Restoring previous memory policy: 4 00:41:25.229 EAL: Calling mem event callback 'spdk:(nil)' 00:41:25.229 EAL: request: mp_malloc_sync 00:41:25.229 EAL: No shared files mode enabled, IPC is disabled 00:41:25.229 EAL: Heap on socket 0 was expanded by 258MB 00:41:25.488 EAL: Calling mem event callback 'spdk:(nil)' 00:41:25.488 EAL: request: mp_malloc_sync 00:41:25.488 EAL: No shared files mode enabled, IPC is disabled 00:41:25.488 EAL: Heap on socket 0 was shrunk by 258MB 00:41:25.488 EAL: Trying to obtain current memory policy. 00:41:25.488 EAL: Setting policy MPOL_PREFERRED for socket 0 00:41:25.488 EAL: Restoring previous memory policy: 4 00:41:25.488 EAL: Calling mem event callback 'spdk:(nil)' 00:41:25.488 EAL: request: mp_malloc_sync 00:41:25.488 EAL: No shared files mode enabled, IPC is disabled 00:41:25.488 EAL: Heap on socket 0 was expanded by 514MB 00:41:25.488 EAL: Calling mem event callback 'spdk:(nil)' 00:41:25.747 EAL: request: mp_malloc_sync 00:41:25.747 EAL: No shared files mode enabled, IPC is disabled 00:41:25.747 EAL: Heap on socket 0 was shrunk by 514MB 00:41:25.747 EAL: Trying to obtain current memory policy. 
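The allocation sizes walk a ladder of 2^k + 2 MB for k = 1..10, i.e. 4, 6, 10, 18, 34, 66, 130, 258, 514, and finally 1026 MB in the stanza that follows, presumably so that the test lands on both sides of each power-of-two boundary. The arithmetic, for quick verification against the expand/shrink messages:
  # Reproduces the size ladder seen in the heap messages above and below.
  for k in $(seq 1 10); do echo "$(( (1 << k) + 2 ))MB"; done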
00:41:25.747 EAL: Setting policy MPOL_PREFERRED for socket 0 00:41:26.005 EAL: Restoring previous memory policy: 4 00:41:26.005 EAL: Calling mem event callback 'spdk:(nil)' 00:41:26.005 EAL: request: mp_malloc_sync 00:41:26.005 EAL: No shared files mode enabled, IPC is disabled 00:41:26.005 EAL: Heap on socket 0 was expanded by 1026MB 00:41:26.005 EAL: Calling mem event callback 'spdk:(nil)' 00:41:26.264 passed 00:41:26.264 00:41:26.264 Run Summary: Type Total Ran Passed Failed Inactive 00:41:26.264 suites 1 1 n/a 0 0 00:41:26.264 tests 2 2 2 0 0 00:41:26.264 asserts 5421 5421 5421 0 n/a 00:41:26.264 00:41:26.264 Elapsed time = 0.994 seconds 00:41:26.264 EAL: request: mp_malloc_sync 00:41:26.264 EAL: No shared files mode enabled, IPC is disabled 00:41:26.264 EAL: Heap on socket 0 was shrunk by 1026MB 00:41:26.264 EAL: Calling mem event callback 'spdk:(nil)' 00:41:26.264 EAL: request: mp_malloc_sync 00:41:26.264 EAL: No shared files mode enabled, IPC is disabled 00:41:26.264 EAL: Heap on socket 0 was shrunk by 2MB 00:41:26.264 EAL: No shared files mode enabled, IPC is disabled 00:41:26.264 EAL: No shared files mode enabled, IPC is disabled 00:41:26.264 EAL: No shared files mode enabled, IPC is disabled 00:41:26.264 ************************************ 00:41:26.264 END TEST env_vtophys 00:41:26.264 ************************************ 00:41:26.264 00:41:26.264 real 0m1.207s 00:41:26.264 user 0m0.658s 00:41:26.264 sys 0m0.419s 00:41:26.264 09:52:33 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:26.264 09:52:33 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:41:26.264 09:52:33 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:41:26.264 09:52:33 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:26.264 09:52:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:26.264 09:52:33 env -- common/autotest_common.sh@10 -- # set +x 00:41:26.264 ************************************ 00:41:26.264 START TEST env_pci 00:41:26.264 ************************************ 00:41:26.264 09:52:33 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:41:26.264 00:41:26.264 00:41:26.264 CUnit - A unit testing framework for C - Version 2.1-3 00:41:26.264 http://cunit.sourceforge.net/ 00:41:26.264 00:41:26.264 00:41:26.264 Suite: pci 00:41:26.264 Test: pci_hook ...[2024-12-09 09:52:33.203115] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58825 has claimed it 00:41:26.264 passed 00:41:26.264 00:41:26.264 Run Summary: Type Total Ran Passed Failed Inactive 00:41:26.264 suites 1 1 n/a 0 0 00:41:26.264 tests 1 1 1 0 0 00:41:26.264 asserts 25 25 25 0 n/a 00:41:26.264 00:41:26.264 Elapsed time = 0.002 seconds 00:41:26.264 EAL: Cannot find device (10000:00:01.0) 00:41:26.264 EAL: Failed to attach device on primary process 00:41:26.264 ************************************ 00:41:26.264 END TEST env_pci 00:41:26.264 ************************************ 00:41:26.265 00:41:26.265 real 0m0.034s 00:41:26.265 user 0m0.015s 00:41:26.265 sys 0m0.018s 00:41:26.265 09:52:33 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:26.265 09:52:33 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:41:26.265 09:52:33 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:41:26.265 09:52:33 env -- env/env.sh@15 -- # uname 00:41:26.265 09:52:33 env 
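The "Calling mem event callback 'spdk:(nil)'" lines in the env_vtophys run above are DPDK's EAL notifying registered listeners each time the heap grows or shrinks; SPDK's env layer registers the callback that shows up under the name "spdk". A minimal sketch of registering such a callback, assuming DPDK's <rte_memory.h> API (the name "example-cb" is made up for illustration):

    /* Sketch, not part of the log: a DPDK mem-event callback like the
     * 'spdk:(nil)' one firing in the trace above. */
    #include <stdio.h>
    #include <rte_memory.h>

    static void
    mem_event_cb(enum rte_mem_event event, const void *addr, size_t len, void *arg)
    {
            /* EAL invokes this on every heap grow/shrink, i.e. each
             * "Heap on socket 0 was expanded/shrunk by ..." line above. */
            (void)arg;
            printf("%s: addr=%p len=%zu\n",
                   event == RTE_MEM_EVENT_ALLOC ? "alloc" : "free", addr, len);
    }

    static int
    register_mem_event_cb(void)
    {
            /* The name only identifies the callback for later unregister. */
            return rte_mem_event_callback_register("example-cb", mem_event_cb, NULL);
    }
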
00:41:26.265 09:52:33 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:41:26.265 09:52:33 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:41:26.265 09:52:33 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:41:26.265 09:52:33 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:41:26.265 09:52:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:41:26.265 09:52:33 env -- common/autotest_common.sh@10 -- # set +x
00:41:26.265 ************************************
00:41:26.265 START TEST env_dpdk_post_init
00:41:26.265 ************************************
00:41:26.265 09:52:33 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:41:26.524 EAL: Detected CPU lcores: 10
00:41:26.524 EAL: Detected NUMA nodes: 1
00:41:26.524 EAL: Detected shared linkage of DPDK
00:41:26.524 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:41:26.524 EAL: Selected IOVA mode 'PA'
00:41:26.524 TELEMETRY: No legacy callbacks, legacy socket not created
00:41:26.524 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:41:26.524 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:41:26.524 Starting DPDK initialization...
00:41:26.524 Starting SPDK post initialization...
00:41:26.524 SPDK NVMe probe
00:41:26.524 Attaching to 0000:00:10.0
00:41:26.524 Attaching to 0000:00:11.0
00:41:26.524 Attached to 0000:00:10.0
00:41:26.524 Attached to 0000:00:11.0
00:41:26.524 Cleaning up...
00:41:26.524
00:41:26.524 real 0m0.196s
00:41:26.524 user 0m0.053s
00:41:26.524 sys 0m0.044s
00:41:26.524 ************************************
00:41:26.524 END TEST env_dpdk_post_init
00:41:26.524 ************************************
00:41:26.524 09:52:33 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:41:26.524 09:52:33 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:41:26.524 09:52:33 env -- env/env.sh@26 -- # uname
00:41:26.524 09:52:33 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:41:26.524 09:52:33 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:41:26.524 09:52:33 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:41:26.524 09:52:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:41:26.524 09:52:33 env -- common/autotest_common.sh@10 -- # set +x
00:41:26.524 ************************************
00:41:26.524 START TEST env_mem_callbacks
00:41:26.524 ************************************
00:41:26.524 09:52:33 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:41:26.784 EAL: Detected CPU lcores: 10
00:41:26.784 EAL: Detected NUMA nodes: 1
00:41:26.784 EAL: Detected shared linkage of DPDK
00:41:26.784 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:41:26.784 EAL: Selected IOVA mode 'PA'
00:41:26.784 TELEMETRY: No legacy callbacks, legacy socket not created
00:41:26.784
00:41:26.784
00:41:26.784 CUnit - A unit testing framework for C - Version 2.1-3
00:41:26.784 http://cunit.sourceforge.net/
00:41:26.784
00:41:26.784
00:41:26.784 Suite: memory
00:41:26.784 Test: test ...
00:41:26.784 register 0x200000200000 2097152
00:41:26.784 malloc 3145728
00:41:26.784 register 0x200000400000 4194304
00:41:26.784 buf 0x200000500000 len 3145728 PASSED
00:41:26.784 malloc 64
00:41:26.784 buf 0x2000004fff40 len 64 PASSED
00:41:26.784 malloc 4194304
00:41:26.784 register 0x200000800000 6291456
00:41:26.784 buf 0x200000a00000 len 4194304 PASSED
00:41:26.784 free 0x200000500000 3145728
00:41:26.784 free 0x2000004fff40 64
00:41:26.784 unregister 0x200000400000 4194304 PASSED
00:41:26.784 free 0x200000a00000 4194304
00:41:26.784 unregister 0x200000800000 6291456 PASSED
00:41:26.784 malloc 8388608
00:41:26.784 register 0x200000400000 10485760
00:41:26.784 buf 0x200000600000 len 8388608 PASSED
00:41:26.784 free 0x200000600000 8388608
00:41:26.784 unregister 0x200000400000 10485760 PASSED
00:41:26.784 passed
00:41:26.784
00:41:26.784 Run Summary: Type Total Ran Passed Failed Inactive
00:41:26.784 suites 1 1 n/a 0 0
00:41:26.784 tests 1 1 1 0 0
00:41:26.784 asserts 15 15 15 0 n/a
00:41:26.784
00:41:26.784 Elapsed time = 0.010 seconds
00:41:26.784
00:41:26.784 real 0m0.150s
00:41:26.784 user 0m0.019s
00:41:26.784 sys 0m0.028s
00:41:26.784 09:52:33 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:41:26.784 09:52:33 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:41:26.784 ************************************
00:41:26.784 END TEST env_mem_callbacks
00:41:26.784 ************************************
00:41:26.784
00:41:26.784 real 0m2.335s
00:41:26.784 user 0m1.134s
00:41:26.784 sys 0m0.884s
00:41:26.784 09:52:33 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:41:26.784 09:52:33 env -- common/autotest_common.sh@10 -- # set +x
00:41:26.784 ************************************
00:41:26.784 END TEST env
00:41:26.784 ************************************
00:41:26.784 09:52:33 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:41:26.784 09:52:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:41:26.784 09:52:33 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:41:26.784 09:52:33 -- common/autotest_common.sh@10 -- # set +x
00:41:26.784 ************************************
00:41:26.784 START TEST rpc
00:41:26.784 ************************************
00:41:27.044 09:52:33 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:41:27.044 * Looking for test storage...
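The register/unregister pairs traced in the env_mem_callbacks run above are driven through SPDK's memory-map API, which is what fires the notifications the test asserts on. A rough sketch of one such cycle, assuming <spdk/env.h> and an already-initialized SPDK environment (the 2 MiB size mirrors the "register 0x200000200000 2097152" line; not the test's actual source):

    /* Sketch: register externally-allocated memory with SPDK's vtophys map,
     * then remove it, as the trace above does repeatedly. */
    #include <spdk/env.h>
    #include <stdlib.h>

    static int
    mem_callbacks_example(void)
    {
            void *buf = NULL;
            size_t len = 2 * 1024 * 1024; /* 2 MiB */

            /* 2 MiB-aligned allocation, like the test's buffers. */
            if (posix_memalign(&buf, 2 * 1024 * 1024, len) != 0) {
                    return -1;
            }
            /* Makes the region visible to SPDK; registered callbacks see
             * the "register <addr> <len>" notification. */
            int rc = spdk_mem_register(buf, len);
            if (rc == 0) {
                    /* ... region is now usable for DMA-capable I/O ... */
                    rc = spdk_mem_unregister(buf, len); /* "unregister ... PASSED" */
            }
            free(buf);
            return rc;
    }
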
00:41:27.044 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:41:27.044 09:52:33 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:41:27.044 09:52:33 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:41:27.044 09:52:33 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:41:27.044 09:52:34 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:41:27.044 09:52:34 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:41:27.044 09:52:34 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:41:27.044 09:52:34 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:41:27.044 09:52:34 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:41:27.044 09:52:34 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:41:27.044 09:52:34 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:41:27.044 09:52:34 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:41:27.044 09:52:34 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:41:27.044 09:52:34 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:41:27.044 09:52:34 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:41:27.044 09:52:34 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:41:27.044 09:52:34 rpc -- scripts/common.sh@344 -- # case "$op" in
00:41:27.044 09:52:34 rpc -- scripts/common.sh@345 -- # : 1
00:41:27.044 09:52:34 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:41:27.044 09:52:34 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:41:27.044 09:52:34 rpc -- scripts/common.sh@365 -- # decimal 1
00:41:27.044 09:52:34 rpc -- scripts/common.sh@353 -- # local d=1
00:41:27.044 09:52:34 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:41:27.044 09:52:34 rpc -- scripts/common.sh@355 -- # echo 1
00:41:27.044 09:52:34 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:41:27.044 09:52:34 rpc -- scripts/common.sh@366 -- # decimal 2
00:41:27.044 09:52:34 rpc -- scripts/common.sh@353 -- # local d=2
00:41:27.044 09:52:34 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:41:27.044 09:52:34 rpc -- scripts/common.sh@355 -- # echo 2
00:41:27.044 09:52:34 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:41:27.044 09:52:34 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:41:27.044 09:52:34 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:41:27.044 09:52:34 rpc -- scripts/common.sh@368 -- # return 0
00:41:27.044 09:52:34 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:41:27.044 09:52:34 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:41:27.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:41:27.044 --rc genhtml_branch_coverage=1
00:41:27.044 --rc genhtml_function_coverage=1
00:41:27.044 --rc genhtml_legend=1
00:41:27.044 --rc geninfo_all_blocks=1
00:41:27.044 --rc geninfo_unexecuted_blocks=1
00:41:27.044
00:41:27.044 '
00:41:27.044 09:52:34 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:41:27.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:41:27.044 --rc genhtml_branch_coverage=1
00:41:27.044 --rc genhtml_function_coverage=1
00:41:27.044 --rc genhtml_legend=1
00:41:27.044 --rc geninfo_all_blocks=1
00:41:27.044 --rc geninfo_unexecuted_blocks=1
00:41:27.044
00:41:27.044 '
00:41:27.044 09:52:34 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:41:27.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:41:27.044 --rc genhtml_branch_coverage=1
00:41:27.044 --rc genhtml_function_coverage=1
00:41:27.044 --rc genhtml_legend=1
00:41:27.044 --rc geninfo_all_blocks=1
00:41:27.044 --rc geninfo_unexecuted_blocks=1
00:41:27.044
00:41:27.044 '
00:41:27.044 09:52:34 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:41:27.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:41:27.044 --rc genhtml_branch_coverage=1
00:41:27.044 --rc genhtml_function_coverage=1
00:41:27.044 --rc genhtml_legend=1
00:41:27.044 --rc geninfo_all_blocks=1
00:41:27.044 --rc geninfo_unexecuted_blocks=1
00:41:27.044
00:41:27.044 '
00:41:27.044 09:52:34 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58948
00:41:27.044 09:52:34 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:41:27.044 09:52:34 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:41:27.044 09:52:34 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58948
00:41:27.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:41:27.044 09:52:34 rpc -- common/autotest_common.sh@835 -- # '[' -z 58948 ']'
00:41:27.044 09:52:34 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:41:27.044 09:52:34 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:41:27.044 09:52:34 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:41:27.044 09:52:34 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:41:27.044 09:52:34 rpc -- common/autotest_common.sh@10 -- # set +x
00:41:27.303 [2024-12-09 09:52:34.127470] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization...
00:41:27.303 [2024-12-09 09:52:34.127571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58948 ]
00:41:27.303 [2024-12-09 09:52:34.270175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:41:27.303 [2024-12-09 09:52:34.326262] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:41:27.303 [2024-12-09 09:52:34.326384] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58948' to capture a snapshot of events at runtime.
00:41:27.303 [2024-12-09 09:52:34.326430] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:41:27.303 [2024-12-09 09:52:34.326463] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:41:27.303 [2024-12-09 09:52:34.326501] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58948 for offline analysis/debug.
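The rpc_cmd calls in the rpc suite that follows are thin wrappers around JSON-RPC 2.0 requests sent over the Unix socket spdk_tgt now listens on (/var/tmp/spdk.sock above). A self-contained sketch of the call 'rpc_cmd bdev_malloc_create 8 512' makes - 8 MiB at a 512-byte block size is num_blocks 16384, which matches the bdev dumps below (plain POSIX C; the response in the comment is illustrative, not copied from the log):

    /* Sketch: one raw JSON-RPC request to SPDK's RPC socket. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int
    main(void)
    {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            struct sockaddr_un addr = { .sun_family = AF_UNIX };
            strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);

            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
                    perror("connect"); /* cf. the expected failure in TEST skip_rpc later */
                    return 1;
            }

            /* 8 MiB / 512 B = 16384 blocks, as in the trace's bdev dumps. */
            const char *req =
                "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"bdev_malloc_create\","
                "\"params\":{\"num_blocks\":16384,\"block_size\":512}}";
            write(fd, req, strlen(req));

            char resp[4096];
            ssize_t n = read(fd, resp, sizeof(resp) - 1);
            if (n > 0) {
                    resp[n] = '\0';
                    /* e.g. {"jsonrpc":"2.0","id":1,"result":"Malloc0"} */
                    printf("%s\n", resp);
            }
            close(fd);
            return 0;
    }
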
00:41:27.303 [2024-12-09 09:52:34.326924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:41:28.234 09:52:35 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:41:28.234 09:52:35 rpc -- common/autotest_common.sh@868 -- # return 0
00:41:28.234 09:52:35 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:41:28.234 09:52:35 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:41:28.234 09:52:35 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:41:28.234 09:52:35 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:41:28.234 09:52:35 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:41:28.234 09:52:35 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:41:28.234 09:52:35 rpc -- common/autotest_common.sh@10 -- # set +x
00:41:28.234 ************************************
00:41:28.234 START TEST rpc_integrity
00:41:28.234 ************************************
00:41:28.234 09:52:35 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:41:28.234 09:52:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:41:28.234 09:52:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:28.234 09:52:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:41:28.234 09:52:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:28.234 09:52:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:41:28.234 09:52:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:41:28.234 09:52:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:41:28.234 09:52:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:41:28.234 09:52:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:28.234 09:52:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:41:28.234 09:52:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:28.234 09:52:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:41:28.234 09:52:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:41:28.234 09:52:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:28.234 09:52:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:41:28.234 09:52:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:28.234 09:52:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:41:28.234 {
00:41:28.234 "aliases": [
00:41:28.234 "b0a92bc3-b2b5-4b80-83e7-230e7a8f5c36"
00:41:28.234 ],
00:41:28.234 "assigned_rate_limits": {
00:41:28.234 "r_mbytes_per_sec": 0,
00:41:28.234 "rw_ios_per_sec": 0,
00:41:28.234 "rw_mbytes_per_sec": 0,
00:41:28.234 "w_mbytes_per_sec": 0
00:41:28.234 },
00:41:28.234 "block_size": 512,
00:41:28.234 "claimed": false,
00:41:28.234 "driver_specific": {},
00:41:28.234 "memory_domains": [
00:41:28.234 {
00:41:28.234 "dma_device_id": "system",
00:41:28.234 "dma_device_type": 1
00:41:28.234 },
00:41:28.234 {
00:41:28.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:41:28.234 "dma_device_type": 2
00:41:28.234 }
00:41:28.234 ],
00:41:28.234 "name": "Malloc0",
00:41:28.234 "num_blocks": 16384,
00:41:28.234 "product_name": "Malloc disk",
00:41:28.234 "supported_io_types": {
00:41:28.234 "abort": true,
00:41:28.234 "compare": false,
00:41:28.234 "compare_and_write": false,
00:41:28.234 "copy": true,
00:41:28.234 "flush": true,
00:41:28.234 "get_zone_info": false,
00:41:28.234 "nvme_admin": false,
00:41:28.234 "nvme_io": false,
00:41:28.234 "nvme_io_md": false,
00:41:28.234 "nvme_iov_md": false,
00:41:28.234 "read": true,
00:41:28.234 "reset": true,
00:41:28.234 "seek_data": false,
00:41:28.234 "seek_hole": false,
00:41:28.234 "unmap": true,
00:41:28.234 "write": true,
00:41:28.234 "write_zeroes": true,
00:41:28.234 "zcopy": true,
00:41:28.234 "zone_append": false,
00:41:28.234 "zone_management": false
00:41:28.234 },
00:41:28.234 "uuid": "b0a92bc3-b2b5-4b80-83e7-230e7a8f5c36",
00:41:28.234 "zoned": false
00:41:28.234 }
00:41:28.234 ]'
00:41:28.234 09:52:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:41:28.234 09:52:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:41:28.234 09:52:35 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:41:28.234 09:52:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:28.234 09:52:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:41:28.234 [2024-12-09 09:52:35.267359] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:41:28.234 [2024-12-09 09:52:35.267432] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:41:28.234 [2024-12-09 09:52:35.267448] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17563e0
00:41:28.234 [2024-12-09 09:52:35.267454] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:41:28.234 [2024-12-09 09:52:35.268946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:41:28.234 [2024-12-09 09:52:35.269057] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:41:28.234 Passthru0
00:41:28.234 09:52:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:28.234 09:52:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:41:28.234 09:52:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:28.503 09:52:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:41:28.503 09:52:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:28.503 09:52:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:41:28.503 {
00:41:28.503 "aliases": [
00:41:28.503 "b0a92bc3-b2b5-4b80-83e7-230e7a8f5c36"
00:41:28.503 ],
00:41:28.503 "assigned_rate_limits": {
00:41:28.503 "r_mbytes_per_sec": 0,
00:41:28.503 "rw_ios_per_sec": 0,
00:41:28.503 "rw_mbytes_per_sec": 0,
00:41:28.503 "w_mbytes_per_sec": 0
00:41:28.503 },
00:41:28.503 "block_size": 512,
00:41:28.503 "claim_type": "exclusive_write",
00:41:28.503 "claimed": true,
00:41:28.503 "driver_specific": {},
00:41:28.503 "memory_domains": [
00:41:28.503 {
00:41:28.503 "dma_device_id": "system",
00:41:28.503 "dma_device_type": 1
00:41:28.503 },
00:41:28.503 {
00:41:28.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:41:28.503 "dma_device_type": 2
00:41:28.503 }
00:41:28.503 ],
00:41:28.503 "name": "Malloc0",
00:41:28.503 "num_blocks": 16384,
00:41:28.503 "product_name": "Malloc disk",
00:41:28.503 "supported_io_types": {
00:41:28.503 "abort": true,
00:41:28.503 "compare": false,
00:41:28.503 "compare_and_write": false,
00:41:28.503 "copy": true,
00:41:28.503 "flush": true,
00:41:28.503 "get_zone_info": false,
00:41:28.503 "nvme_admin": false,
00:41:28.503 "nvme_io": false,
00:41:28.503 "nvme_io_md": false,
00:41:28.503 "nvme_iov_md": false,
00:41:28.503 "read": true,
00:41:28.503 "reset": true,
00:41:28.503 "seek_data": false,
00:41:28.503 "seek_hole": false,
00:41:28.503 "unmap": true,
00:41:28.503 "write": true,
00:41:28.503 "write_zeroes": true,
00:41:28.503 "zcopy": true,
00:41:28.503 "zone_append": false,
00:41:28.503 "zone_management": false
00:41:28.503 },
00:41:28.503 "uuid": "b0a92bc3-b2b5-4b80-83e7-230e7a8f5c36",
00:41:28.503 "zoned": false
00:41:28.503 },
00:41:28.503 {
00:41:28.503 "aliases": [
00:41:28.503 "13c50484-3cf0-52e0-a33b-9960b9d68b2e"
00:41:28.503 ],
00:41:28.504 "assigned_rate_limits": {
00:41:28.504 "r_mbytes_per_sec": 0,
00:41:28.504 "rw_ios_per_sec": 0,
00:41:28.504 "rw_mbytes_per_sec": 0,
00:41:28.504 "w_mbytes_per_sec": 0
00:41:28.504 },
00:41:28.504 "block_size": 512,
00:41:28.504 "claimed": false,
00:41:28.504 "driver_specific": {
00:41:28.504 "passthru": {
00:41:28.504 "base_bdev_name": "Malloc0",
00:41:28.504 "name": "Passthru0"
00:41:28.504 }
00:41:28.504 },
00:41:28.504 "memory_domains": [
00:41:28.504 {
00:41:28.504 "dma_device_id": "system",
00:41:28.504 "dma_device_type": 1
00:41:28.504 },
00:41:28.504 {
00:41:28.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:41:28.504 "dma_device_type": 2
00:41:28.504 }
00:41:28.504 ],
00:41:28.504 "name": "Passthru0",
00:41:28.504 "num_blocks": 16384,
00:41:28.504 "product_name": "passthru",
00:41:28.504 "supported_io_types": {
00:41:28.504 "abort": true,
00:41:28.504 "compare": false,
00:41:28.504 "compare_and_write": false,
00:41:28.504 "copy": true,
00:41:28.504 "flush": true,
00:41:28.504 "get_zone_info": false,
00:41:28.504 "nvme_admin": false,
00:41:28.504 "nvme_io": false,
00:41:28.504 "nvme_io_md": false,
00:41:28.504 "nvme_iov_md": false,
00:41:28.504 "read": true,
00:41:28.504 "reset": true,
00:41:28.504 "seek_data": false,
00:41:28.504 "seek_hole": false,
00:41:28.504 "unmap": true,
00:41:28.504 "write": true,
00:41:28.504 "write_zeroes": true,
00:41:28.504 "zcopy": true,
00:41:28.504 "zone_append": false,
00:41:28.504 "zone_management": false
00:41:28.504 },
00:41:28.504 "uuid": "13c50484-3cf0-52e0-a33b-9960b9d68b2e",
00:41:28.504 "zoned": false
00:41:28.504 }
00:41:28.504 ]'
00:41:28.504 09:52:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:41:28.504 09:52:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:41:28.504 09:52:35 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:41:28.504 09:52:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:28.504 09:52:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:41:28.504 09:52:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:28.504 09:52:35 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:41:28.504 09:52:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:28.504 09:52:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:41:28.504 09:52:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:28.504 09:52:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:41:28.504 09:52:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:28.504 09:52:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:41:28.504 09:52:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:28.504 09:52:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:41:28.504 09:52:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:41:28.504 ************************************
00:41:28.504 END TEST rpc_integrity
00:41:28.504 ************************************
00:41:28.504 09:52:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:41:28.504
00:41:28.504 real 0m0.326s
00:41:28.504 user 0m0.194s
00:41:28.504 sys 0m0.052s
00:41:28.504 09:52:35 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:41:28.504 09:52:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:41:28.504 09:52:35 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:41:28.504 09:52:35 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:41:28.504 09:52:35 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:41:28.504 09:52:35 rpc -- common/autotest_common.sh@10 -- # set +x
00:41:28.504 ************************************
00:41:28.504 START TEST rpc_plugins
00:41:28.504 ************************************
00:41:28.504 09:52:35 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:41:28.504 09:52:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:41:28.504 09:52:35 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:28.504 09:52:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:41:28.504 09:52:35 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:28.504 09:52:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:41:28.504 09:52:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:41:28.504 09:52:35 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:28.504 09:52:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:41:28.504 09:52:35 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:28.504 09:52:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:41:28.504 {
00:41:28.504 "aliases": [
00:41:28.504 "4eb68261-9b69-492a-996a-b40e7e111a93"
00:41:28.504 ],
00:41:28.504 "assigned_rate_limits": {
00:41:28.504 "r_mbytes_per_sec": 0,
00:41:28.504 "rw_ios_per_sec": 0,
00:41:28.504 "rw_mbytes_per_sec": 0,
00:41:28.504 "w_mbytes_per_sec": 0
00:41:28.504 },
00:41:28.504 "block_size": 4096,
00:41:28.504 "claimed": false,
00:41:28.504 "driver_specific": {},
00:41:28.504 "memory_domains": [
00:41:28.504 {
00:41:28.504 "dma_device_id": "system",
00:41:28.504 "dma_device_type": 1
00:41:28.504 },
00:41:28.504 {
00:41:28.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:41:28.504 "dma_device_type": 2
00:41:28.504 }
00:41:28.504 ],
00:41:28.504 "name": "Malloc1",
00:41:28.504 "num_blocks": 256,
00:41:28.504 "product_name": "Malloc disk",
00:41:28.504 "supported_io_types": {
00:41:28.504 "abort": true,
00:41:28.504 "compare": false,
00:41:28.504 "compare_and_write": false,
00:41:28.504 "copy": true,
00:41:28.504 "flush": true,
00:41:28.504 "get_zone_info": false,
00:41:28.504 "nvme_admin": false,
00:41:28.504 "nvme_io": false,
00:41:28.504 "nvme_io_md": false,
00:41:28.504 "nvme_iov_md": false,
00:41:28.504 "read": true,
00:41:28.504 "reset": true,
00:41:28.504 "seek_data": false,
00:41:28.504 "seek_hole": false,
00:41:28.504 "unmap": true,
00:41:28.504 "write": true,
00:41:28.504 "write_zeroes": true,
00:41:28.504 "zcopy": true,
00:41:28.504 "zone_append": false,
00:41:28.504 "zone_management": false 00:41:28.504 }, 00:41:28.504 "uuid": "4eb68261-9b69-492a-996a-b40e7e111a93", 00:41:28.504 "zoned": false 00:41:28.504 } 00:41:28.504 ]' 00:41:28.504 09:52:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:41:28.763 09:52:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:41:28.763 09:52:35 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:41:28.763 09:52:35 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.763 09:52:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:41:28.763 09:52:35 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.763 09:52:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:41:28.763 09:52:35 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.763 09:52:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:41:28.763 09:52:35 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.763 09:52:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:41:28.763 09:52:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:41:28.763 ************************************ 00:41:28.763 END TEST rpc_plugins 00:41:28.763 ************************************ 00:41:28.763 09:52:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:41:28.763 00:41:28.763 real 0m0.156s 00:41:28.763 user 0m0.092s 00:41:28.763 sys 0m0.024s 00:41:28.763 09:52:35 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:28.763 09:52:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:41:28.763 09:52:35 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:41:28.763 09:52:35 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:28.763 09:52:35 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:28.763 09:52:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:41:28.763 ************************************ 00:41:28.763 START TEST rpc_trace_cmd_test 00:41:28.763 ************************************ 00:41:28.763 09:52:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:41:28.763 09:52:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:41:28.763 09:52:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:41:28.763 09:52:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.763 09:52:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:41:28.763 09:52:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.763 09:52:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:41:28.763 "bdev": { 00:41:28.763 "mask": "0x8", 00:41:28.763 "tpoint_mask": "0xffffffffffffffff" 00:41:28.763 }, 00:41:28.763 "bdev_nvme": { 00:41:28.763 "mask": "0x4000", 00:41:28.763 "tpoint_mask": "0x0" 00:41:28.763 }, 00:41:28.763 "bdev_raid": { 00:41:28.763 "mask": "0x20000", 00:41:28.763 "tpoint_mask": "0x0" 00:41:28.763 }, 00:41:28.763 "blob": { 00:41:28.763 "mask": "0x10000", 00:41:28.763 "tpoint_mask": "0x0" 00:41:28.763 }, 00:41:28.763 "blobfs": { 00:41:28.763 "mask": "0x80", 00:41:28.763 "tpoint_mask": "0x0" 00:41:28.763 }, 00:41:28.763 "dsa": { 00:41:28.763 "mask": "0x200", 00:41:28.763 "tpoint_mask": "0x0" 00:41:28.763 }, 00:41:28.763 "ftl": { 00:41:28.763 "mask": "0x40", 00:41:28.763 "tpoint_mask": "0x0" 00:41:28.763 }, 00:41:28.763 "iaa": { 00:41:28.763 "mask": "0x1000", 
00:41:28.763 "tpoint_mask": "0x0" 00:41:28.763 }, 00:41:28.763 "iscsi_conn": { 00:41:28.763 "mask": "0x2", 00:41:28.763 "tpoint_mask": "0x0" 00:41:28.763 }, 00:41:28.763 "nvme_pcie": { 00:41:28.763 "mask": "0x800", 00:41:28.763 "tpoint_mask": "0x0" 00:41:28.763 }, 00:41:28.763 "nvme_tcp": { 00:41:28.763 "mask": "0x2000", 00:41:28.763 "tpoint_mask": "0x0" 00:41:28.763 }, 00:41:28.763 "nvmf_rdma": { 00:41:28.763 "mask": "0x10", 00:41:28.763 "tpoint_mask": "0x0" 00:41:28.763 }, 00:41:28.763 "nvmf_tcp": { 00:41:28.763 "mask": "0x20", 00:41:28.763 "tpoint_mask": "0x0" 00:41:28.763 }, 00:41:28.763 "scheduler": { 00:41:28.763 "mask": "0x40000", 00:41:28.763 "tpoint_mask": "0x0" 00:41:28.763 }, 00:41:28.763 "scsi": { 00:41:28.763 "mask": "0x4", 00:41:28.763 "tpoint_mask": "0x0" 00:41:28.763 }, 00:41:28.763 "sock": { 00:41:28.763 "mask": "0x8000", 00:41:28.763 "tpoint_mask": "0x0" 00:41:28.763 }, 00:41:28.763 "thread": { 00:41:28.763 "mask": "0x400", 00:41:28.763 "tpoint_mask": "0x0" 00:41:28.763 }, 00:41:28.763 "tpoint_group_mask": "0x8", 00:41:28.763 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58948" 00:41:28.763 }' 00:41:28.763 09:52:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:41:28.763 09:52:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:41:28.763 09:52:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:41:29.022 09:52:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:41:29.022 09:52:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:41:29.022 09:52:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:41:29.022 09:52:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:41:29.022 09:52:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:41:29.023 09:52:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:41:29.023 ************************************ 00:41:29.023 END TEST rpc_trace_cmd_test 00:41:29.023 ************************************ 00:41:29.023 09:52:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:41:29.023 00:41:29.023 real 0m0.259s 00:41:29.023 user 0m0.212s 00:41:29.023 sys 0m0.036s 00:41:29.023 09:52:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:29.023 09:52:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:41:29.023 09:52:36 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:41:29.023 09:52:36 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:41:29.023 09:52:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:29.023 09:52:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:29.023 09:52:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:41:29.023 ************************************ 00:41:29.023 START TEST go_rpc 00:41:29.023 ************************************ 00:41:29.023 09:52:36 rpc.go_rpc -- common/autotest_common.sh@1129 -- # go_rpc 00:41:29.023 09:52:36 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:41:29.023 09:52:36 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:41:29.023 09:52:36 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:41:29.282 09:52:36 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:41:29.282 09:52:36 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:41:29.282 09:52:36 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.282 09:52:36 rpc.go_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:41:29.282 09:52:36 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.282 09:52:36 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:41:29.282 09:52:36 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:41:29.282 09:52:36 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["030a5d26-6078-46a4-8697-00f968cfd09d"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"030a5d26-6078-46a4-8697-00f968cfd09d","zoned":false}]' 00:41:29.282 09:52:36 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:41:29.282 09:52:36 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:41:29.282 09:52:36 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:41:29.282 09:52:36 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.282 09:52:36 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:41:29.282 09:52:36 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.282 09:52:36 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:41:29.282 09:52:36 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:41:29.282 09:52:36 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:41:29.282 ************************************ 00:41:29.282 END TEST go_rpc 00:41:29.282 ************************************ 00:41:29.282 09:52:36 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:41:29.282 00:41:29.282 real 0m0.233s 00:41:29.282 user 0m0.147s 00:41:29.282 sys 0m0.049s 00:41:29.282 09:52:36 rpc.go_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:29.282 09:52:36 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:41:29.282 09:52:36 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:41:29.282 09:52:36 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:41:29.282 09:52:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:29.282 09:52:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:29.282 09:52:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:41:29.541 ************************************ 00:41:29.541 START TEST rpc_daemon_integrity 00:41:29.541 ************************************ 00:41:29.541 09:52:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:41:29.541 09:52:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:41:29.541 09:52:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.541 09:52:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:41:29.541 09:52:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.541 09:52:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:41:29.541 09:52:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:41:29.541 
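Spelled out as raw JSON-RPC, the rpc_daemon_integrity run that starts above repeats the malloc-to-passthru-to-delete cycle. A sketch of the request bodies, to be sent one at a time over the socket as in the earlier client sketch (method and parameter names come from the trace; the ids and the auto-assigned bdev name Malloc3 are assumptions based on the log):

    /* Sketch: the five calls this test drives, as raw JSON-RPC 2.0 requests. */
    static const char *daemon_integrity_calls[] = {
            /* 8 MiB malloc bdev, 512-byte blocks -> named "Malloc3" below */
            "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"bdev_malloc_create\","
            "\"params\":{\"num_blocks\":16384,\"block_size\":512}}",
            /* layer a passthru vbdev on top, cf. the vbdev_passthru notices */
            "{\"jsonrpc\":\"2.0\",\"id\":2,\"method\":\"bdev_passthru_create\","
            "\"params\":{\"base_bdev_name\":\"Malloc3\",\"name\":\"Passthru0\"}}",
            /* dump both bdevs; the JSON in the trace is this call's result */
            "{\"jsonrpc\":\"2.0\",\"id\":3,\"method\":\"bdev_get_bdevs\"}",
            /* tear down in reverse order */
            "{\"jsonrpc\":\"2.0\",\"id\":4,\"method\":\"bdev_passthru_delete\","
            "\"params\":{\"name\":\"Passthru0\"}}",
            "{\"jsonrpc\":\"2.0\",\"id\":5,\"method\":\"bdev_malloc_delete\","
            "\"params\":{\"name\":\"Malloc3\"}}",
    };
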
00:41:29.541 09:52:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:41:29.541 09:52:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:41:29.541 09:52:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:29.541 09:52:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:41:29.541 09:52:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:29.541 09:52:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3
00:41:29.541 09:52:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:41:29.541 09:52:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:29.541 09:52:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:41:29.541 09:52:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:29.541 09:52:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:41:29.541 {
00:41:29.541 "aliases": [
00:41:29.541 "d570fbe5-507b-4c43-8c1b-7adbe2ca4bba"
00:41:29.541 ],
00:41:29.541 "assigned_rate_limits": {
00:41:29.541 "r_mbytes_per_sec": 0,
00:41:29.541 "rw_ios_per_sec": 0,
00:41:29.541 "rw_mbytes_per_sec": 0,
00:41:29.542 "w_mbytes_per_sec": 0
00:41:29.542 },
00:41:29.542 "block_size": 512,
00:41:29.542 "claimed": false,
00:41:29.542 "driver_specific": {},
00:41:29.542 "memory_domains": [
00:41:29.542 {
00:41:29.542 "dma_device_id": "system",
00:41:29.542 "dma_device_type": 1
00:41:29.542 },
00:41:29.542 {
00:41:29.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:41:29.542 "dma_device_type": 2
00:41:29.542 }
00:41:29.542 ],
00:41:29.542 "name": "Malloc3",
00:41:29.542 "num_blocks": 16384,
00:41:29.542 "product_name": "Malloc disk",
00:41:29.542 "supported_io_types": {
00:41:29.542 "abort": true,
00:41:29.542 "compare": false,
00:41:29.542 "compare_and_write": false,
00:41:29.542 "copy": true,
00:41:29.542 "flush": true,
00:41:29.542 "get_zone_info": false,
00:41:29.542 "nvme_admin": false,
00:41:29.542 "nvme_io": false,
00:41:29.542 "nvme_io_md": false,
00:41:29.542 "nvme_iov_md": false,
00:41:29.542 "read": true,
00:41:29.542 "reset": true,
00:41:29.542 "seek_data": false,
00:41:29.542 "seek_hole": false,
00:41:29.542 "unmap": true,
00:41:29.542 "write": true,
00:41:29.542 "write_zeroes": true,
00:41:29.542 "zcopy": true,
00:41:29.542 "zone_append": false,
00:41:29.542 "zone_management": false
00:41:29.542 },
00:41:29.542 "uuid": "d570fbe5-507b-4c43-8c1b-7adbe2ca4bba",
00:41:29.542 "zoned": false
00:41:29.542 }
00:41:29.542 ]'
00:41:29.542 09:52:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:41:29.542 09:52:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:41:29.542 09:52:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0
00:41:29.542 09:52:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:29.542 09:52:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:41:29.542 [2024-12-09 09:52:36.493435] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:41:29.542 [2024-12-09 09:52:36.493492] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:41:29.542 [2024-12-09 09:52:36.493508] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1743be0
00:41:29.542 [2024-12-09 09:52:36.493515] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:41:29.542 [2024-12-09 09:52:36.494976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:41:29.542 [2024-12-09 09:52:36.495009] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:41:29.542 Passthru0
00:41:29.542 09:52:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:29.542 09:52:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:41:29.542 09:52:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:29.542 09:52:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:41:29.542 09:52:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:29.542 09:52:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:41:29.542 {
00:41:29.542 "aliases": [
00:41:29.542 "d570fbe5-507b-4c43-8c1b-7adbe2ca4bba"
00:41:29.542 ],
00:41:29.542 "assigned_rate_limits": {
00:41:29.542 "r_mbytes_per_sec": 0,
00:41:29.542 "rw_ios_per_sec": 0,
00:41:29.542 "rw_mbytes_per_sec": 0,
00:41:29.542 "w_mbytes_per_sec": 0
00:41:29.542 },
00:41:29.542 "block_size": 512,
00:41:29.542 "claim_type": "exclusive_write",
00:41:29.542 "claimed": true,
00:41:29.542 "driver_specific": {},
00:41:29.542 "memory_domains": [
00:41:29.542 {
00:41:29.542 "dma_device_id": "system",
00:41:29.542 "dma_device_type": 1
00:41:29.542 },
00:41:29.542 {
00:41:29.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:41:29.542 "dma_device_type": 2
00:41:29.542 }
00:41:29.542 ],
00:41:29.542 "name": "Malloc3",
00:41:29.542 "num_blocks": 16384,
00:41:29.542 "product_name": "Malloc disk",
00:41:29.542 "supported_io_types": {
00:41:29.542 "abort": true,
00:41:29.542 "compare": false,
00:41:29.542 "compare_and_write": false,
00:41:29.542 "copy": true,
00:41:29.542 "flush": true,
00:41:29.542 "get_zone_info": false,
00:41:29.542 "nvme_admin": false,
00:41:29.542 "nvme_io": false,
00:41:29.542 "nvme_io_md": false,
00:41:29.542 "nvme_iov_md": false,
00:41:29.542 "read": true,
00:41:29.542 "reset": true,
00:41:29.542 "seek_data": false,
00:41:29.542 "seek_hole": false,
00:41:29.542 "unmap": true,
00:41:29.542 "write": true,
00:41:29.542 "write_zeroes": true,
00:41:29.542 "zcopy": true,
00:41:29.542 "zone_append": false,
00:41:29.542 "zone_management": false
00:41:29.542 },
00:41:29.542 "uuid": "d570fbe5-507b-4c43-8c1b-7adbe2ca4bba",
00:41:29.542 "zoned": false
00:41:29.542 },
00:41:29.542 {
00:41:29.542 "aliases": [
00:41:29.542 "92db7b8b-c332-5a0e-a22d-5ef0091cc38f"
00:41:29.542 ],
00:41:29.542 "assigned_rate_limits": {
00:41:29.542 "r_mbytes_per_sec": 0,
00:41:29.542 "rw_ios_per_sec": 0,
00:41:29.542 "rw_mbytes_per_sec": 0,
00:41:29.542 "w_mbytes_per_sec": 0
00:41:29.542 },
00:41:29.542 "block_size": 512,
00:41:29.542 "claimed": false,
00:41:29.542 "driver_specific": {
00:41:29.542 "passthru": {
00:41:29.542 "base_bdev_name": "Malloc3",
00:41:29.542 "name": "Passthru0"
00:41:29.542 }
00:41:29.542 },
00:41:29.542 "memory_domains": [
00:41:29.542 {
00:41:29.542 "dma_device_id": "system",
00:41:29.542 "dma_device_type": 1
00:41:29.542 },
00:41:29.542 {
00:41:29.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:41:29.542 "dma_device_type": 2
00:41:29.542 }
00:41:29.542 ],
00:41:29.542 "name": "Passthru0",
00:41:29.542 "num_blocks": 16384,
00:41:29.542 "product_name": "passthru",
00:41:29.542 "supported_io_types": {
00:41:29.542 "abort": true,
00:41:29.542 "compare": false,
00:41:29.542 "compare_and_write": false,
00:41:29.542 "copy": true,
00:41:29.542 "flush": true,
00:41:29.542 "get_zone_info": false,
00:41:29.542 "nvme_admin": false,
00:41:29.542 "nvme_io": false,
00:41:29.542 "nvme_io_md": false,
00:41:29.542 "nvme_iov_md": false,
00:41:29.542 "read": true,
00:41:29.542 "reset": true,
00:41:29.542 "seek_data": false,
00:41:29.542 "seek_hole": false,
00:41:29.542 "unmap": true,
00:41:29.542 "write": true,
00:41:29.542 "write_zeroes": true,
00:41:29.542 "zcopy": true,
00:41:29.542 "zone_append": false,
00:41:29.542 "zone_management": false
00:41:29.542 },
00:41:29.542 "uuid": "92db7b8b-c332-5a0e-a22d-5ef0091cc38f",
00:41:29.542 "zoned": false
00:41:29.542 }
00:41:29.542 ]'
00:41:29.542 09:52:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:41:29.542 09:52:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:41:29.542 09:52:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:41:29.542 09:52:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:29.542 09:52:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:41:29.802 09:52:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:29.802 09:52:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3
00:41:29.802 09:52:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:29.802 09:52:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:41:29.802 09:52:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:29.802 09:52:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:41:29.802 09:52:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:29.802 09:52:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:41:29.802 09:52:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:29.802 09:52:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:41:29.802 09:52:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:41:29.802 09:52:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:41:29.802
00:41:29.802 real 0m0.323s
00:41:29.802 user 0m0.196s
00:41:29.802 sys 0m0.045s
00:41:29.802 09:52:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:41:29.802 09:52:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:41:29.802 ************************************
00:41:29.802 END TEST rpc_daemon_integrity
00:41:29.802 ************************************
00:41:29.802 09:52:36 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:41:29.802 09:52:36 rpc -- rpc/rpc.sh@84 -- # killprocess 58948
00:41:29.802 09:52:36 rpc -- common/autotest_common.sh@954 -- # '[' -z 58948 ']'
00:41:29.802 09:52:36 rpc -- common/autotest_common.sh@958 -- # kill -0 58948
00:41:29.802 09:52:36 rpc -- common/autotest_common.sh@959 -- # uname
00:41:29.802 09:52:36 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:41:29.802 09:52:36 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58948
00:41:29.802 killing process with pid 58948
00:41:29.802 09:52:36 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:41:29.802 09:52:36 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:41:29.802 09:52:36 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58948'
00:41:29.802 09:52:36 rpc -- common/autotest_common.sh@973 -- # kill 58948
00:41:30.061 09:52:37 rpc -- common/autotest_common.sh@978 -- # wait 58948
00:41:30.061
00:41:30.061 real 0m3.234s
00:41:30.061 user 0m4.150s
00:41:30.061 sys 0m0.907s
00:41:30.061 09:52:37 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:41:30.061 09:52:37 rpc -- common/autotest_common.sh@10 -- # set +x
00:41:30.061 ************************************
00:41:30.061 END TEST rpc
00:41:30.061 ************************************
00:41:30.320 09:52:37 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:41:30.320 09:52:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:41:30.320 09:52:37 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:41:30.320 09:52:37 -- common/autotest_common.sh@10 -- # set +x
00:41:30.320 ************************************
00:41:30.320 START TEST skip_rpc
00:41:30.320 ************************************
00:41:30.320 09:52:37 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:41:30.320 * Looking for test storage...
00:41:30.320 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:41:30.320 09:52:37 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:41:30.320 09:52:37 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:41:30.320 09:52:37 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:41:30.320 09:52:37 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:41:30.320 09:52:37 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:41:30.320 09:52:37 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:41:30.320 09:52:37 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:41:30.320 09:52:37 skip_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:41:30.320 09:52:37 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:41:30.320 09:52:37 skip_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:41:30.320 09:52:37 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:41:30.320 09:52:37 skip_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:41:30.320 09:52:37 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:41:30.320 09:52:37 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:41:30.320 09:52:37 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:41:30.320 09:52:37 skip_rpc -- scripts/common.sh@344 -- # case "$op" in
00:41:30.320 09:52:37 skip_rpc -- scripts/common.sh@345 -- # : 1
00:41:30.320 09:52:37 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:41:30.320 09:52:37 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:41:30.320 09:52:37 skip_rpc -- scripts/common.sh@365 -- # decimal 1
00:41:30.320 09:52:37 skip_rpc -- scripts/common.sh@353 -- # local d=1
00:41:30.320 09:52:37 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:41:30.320 09:52:37 skip_rpc -- scripts/common.sh@355 -- # echo 1
00:41:30.320 09:52:37 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:41:30.320 09:52:37 skip_rpc -- scripts/common.sh@366 -- # decimal 2
00:41:30.320 09:52:37 skip_rpc -- scripts/common.sh@353 -- # local d=2
00:41:30.320 09:52:37 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:41:30.320 09:52:37 skip_rpc -- scripts/common.sh@355 -- # echo 2
00:41:30.320 09:52:37 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:41:30.320 09:52:37 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:41:30.320 09:52:37 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:41:30.320 09:52:37 skip_rpc -- scripts/common.sh@368 -- # return 0
00:41:30.320 09:52:37 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:41:30.320 09:52:37 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:41:30.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:41:30.320 --rc genhtml_branch_coverage=1
00:41:30.320 --rc genhtml_function_coverage=1
00:41:30.320 --rc genhtml_legend=1
00:41:30.320 --rc geninfo_all_blocks=1
00:41:30.320 --rc geninfo_unexecuted_blocks=1
00:41:30.320
00:41:30.320 '
00:41:30.320 09:52:37 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:41:30.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:41:30.320 --rc genhtml_branch_coverage=1
00:41:30.320 --rc genhtml_function_coverage=1
00:41:30.320 --rc genhtml_legend=1
00:41:30.320 --rc geninfo_all_blocks=1
00:41:30.320 --rc geninfo_unexecuted_blocks=1
00:41:30.320
00:41:30.320 '
00:41:30.320 09:52:37 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:41:30.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:41:30.320 --rc genhtml_branch_coverage=1
00:41:30.320 --rc genhtml_function_coverage=1
00:41:30.320 --rc genhtml_legend=1
00:41:30.320 --rc geninfo_all_blocks=1
00:41:30.320 --rc geninfo_unexecuted_blocks=1
00:41:30.320
00:41:30.320 '
00:41:30.320 09:52:37 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:41:30.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:41:30.320 --rc genhtml_branch_coverage=1
00:41:30.320 --rc genhtml_function_coverage=1
00:41:30.320 --rc genhtml_legend=1
00:41:30.320 --rc geninfo_all_blocks=1
00:41:30.320 --rc geninfo_unexecuted_blocks=1
00:41:30.320
00:41:30.320 '
00:41:30.320 09:52:37 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:41:30.320 09:52:37 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:41:30.320 09:52:37 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:41:30.320 09:52:37 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:41:30.320 09:52:37 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:41:30.320 09:52:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:41:30.320 ************************************
00:41:30.320 START TEST skip_rpc
00:41:30.320 ************************************
00:41:30.579 09:52:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc
00:41:30.579 09:52:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=59217
00:41:30.579 09:52:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:41:30.579 09:52:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:41:30.579 09:52:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:41:30.579 [2024-12-09 09:52:37.422990] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization...
00:41:30.579 [2024-12-09 09:52:37.423170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59217 ]
00:41:30.579 [2024-12-09 09:52:37.571443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:41:30.579 [2024-12-09 09:52:37.618222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:41:35.942 09:52:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:41:35.942 09:52:42 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0
00:41:35.942 09:52:42 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version
00:41:35.942 09:52:42 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:41:35.942 09:52:42 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:41:35.942 09:52:42 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:41:35.942 09:52:42 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:41:35.942 09:52:42 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version
00:41:35.942 09:52:42 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:35.942 09:52:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:41:35.942 2024/12/09 09:52:42 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory
00:41:35.942 09:52:42 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:41:35.942 09:52:42 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1
00:41:35.942 09:52:42 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:41:35.942 09:52:42 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:41:35.942 09:52:42 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:41:35.942 09:52:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:41:35.942 09:52:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59217
00:41:35.942 09:52:42 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 59217 ']'
00:41:35.942 09:52:42 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 59217
00:41:35.942 09:52:42 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname
00:41:35.942 09:52:42 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:41:35.942 09:52:42 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59217
00:41:35.942 killing process with pid 59217
00:41:35.942 09:52:42 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:41:35.942 09:52:42 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:41:35.942 09:52:42 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59217'
00:41:35.942 09:52:42 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 59217
00:41:35.942 09:52:42 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 59217
00:41:35.942 ************************************
00:41:35.942 END TEST skip_rpc
00:41:35.942 ************************************
00:41:35.942
00:41:35.942 real 0m5.377s
00:41:35.942 user 0m5.058s
00:41:35.942 sys 0m0.243s
00:41:35.942 09:52:42 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:41:35.942 09:52:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:41:35.942 09:52:42 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:41:35.942 09:52:42 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:41:35.942 09:52:42 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:41:35.942 09:52:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:41:35.942 ************************************
00:41:35.942 START TEST skip_rpc_with_json
00:41:35.942 ************************************
00:41:35.942 09:52:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json
00:41:35.942 09:52:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:41:35.942 09:52:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59310
00:41:35.942 09:52:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:41:35.942 09:52:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:41:35.942 09:52:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59310
00:41:35.942 09:52:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 59310 ']'
00:41:35.942 09:52:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:41:35.942 09:52:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100
00:41:35.942 09:52:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:41:35.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:41:35.942 09:52:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable
00:41:35.942 09:52:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:41:35.942 [2024-12-09 09:52:42.839400] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization...
00:41:35.942 [2024-12-09 09:52:42.839495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59310 ] 00:41:36.201 [2024-12-09 09:52:42.985303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:36.201 [2024-12-09 09:52:43.029665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:36.768 09:52:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:36.768 09:52:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:41:36.768 09:52:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:41:36.768 09:52:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:36.769 09:52:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:41:36.769 [2024-12-09 09:52:43.772137] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:41:36.769 2024/12/09 09:52:43 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:41:36.769 request: 00:41:36.769 { 00:41:36.769 "method": "nvmf_get_transports", 00:41:36.769 "params": { 00:41:36.769 "trtype": "tcp" 00:41:36.769 } 00:41:36.769 } 00:41:36.769 Got JSON-RPC error response 00:41:36.769 GoRPCClient: error on JSON-RPC call 00:41:36.769 09:52:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:41:36.769 09:52:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:41:36.769 09:52:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:36.769 09:52:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:41:36.769 [2024-12-09 09:52:43.780894] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:36.769 09:52:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:36.769 09:52:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:41:36.769 09:52:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:36.769 09:52:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:41:37.028 09:52:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:37.028 09:52:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:41:37.028 { 00:41:37.028 "subsystems": [ 00:41:37.028 { 00:41:37.028 "subsystem": "fsdev", 00:41:37.028 "config": [ 00:41:37.028 { 00:41:37.028 "method": "fsdev_set_opts", 00:41:37.028 "params": { 00:41:37.028 "fsdev_io_cache_size": 256, 00:41:37.028 "fsdev_io_pool_size": 65535 00:41:37.028 } 00:41:37.028 } 00:41:37.028 ] 00:41:37.028 }, 00:41:37.028 { 00:41:37.028 "subsystem": "keyring", 00:41:37.028 "config": [] 00:41:37.028 }, 00:41:37.028 { 00:41:37.028 "subsystem": "iobuf", 00:41:37.028 "config": [ 00:41:37.028 { 00:41:37.028 "method": "iobuf_set_options", 00:41:37.028 "params": { 00:41:37.028 "enable_numa": false, 00:41:37.028 "large_bufsize": 135168, 00:41:37.028 "large_pool_count": 1024, 00:41:37.028 "small_bufsize": 8192, 00:41:37.028 "small_pool_count": 8192 00:41:37.028 } 
00:41:37.028 } 00:41:37.028 ] 00:41:37.028 }, 00:41:37.028 { 00:41:37.028 "subsystem": "sock", 00:41:37.028 "config": [ 00:41:37.028 { 00:41:37.028 "method": "sock_set_default_impl", 00:41:37.028 "params": { 00:41:37.028 "impl_name": "posix" 00:41:37.028 } 00:41:37.028 }, 00:41:37.028 { 00:41:37.029 "method": "sock_impl_set_options", 00:41:37.029 "params": { 00:41:37.029 "enable_ktls": false, 00:41:37.029 "enable_placement_id": 0, 00:41:37.029 "enable_quickack": false, 00:41:37.029 "enable_recv_pipe": true, 00:41:37.029 "enable_zerocopy_send_client": false, 00:41:37.029 "enable_zerocopy_send_server": true, 00:41:37.029 "impl_name": "ssl", 00:41:37.029 "recv_buf_size": 4096, 00:41:37.029 "send_buf_size": 4096, 00:41:37.029 "tls_version": 0, 00:41:37.029 "zerocopy_threshold": 0 00:41:37.029 } 00:41:37.029 }, 00:41:37.029 { 00:41:37.029 "method": "sock_impl_set_options", 00:41:37.029 "params": { 00:41:37.029 "enable_ktls": false, 00:41:37.029 "enable_placement_id": 0, 00:41:37.029 "enable_quickack": false, 00:41:37.029 "enable_recv_pipe": true, 00:41:37.029 "enable_zerocopy_send_client": false, 00:41:37.029 "enable_zerocopy_send_server": true, 00:41:37.029 "impl_name": "posix", 00:41:37.029 "recv_buf_size": 2097152, 00:41:37.029 "send_buf_size": 2097152, 00:41:37.029 "tls_version": 0, 00:41:37.029 "zerocopy_threshold": 0 00:41:37.029 } 00:41:37.029 } 00:41:37.029 ] 00:41:37.029 }, 00:41:37.029 { 00:41:37.029 "subsystem": "vmd", 00:41:37.029 "config": [] 00:41:37.029 }, 00:41:37.029 { 00:41:37.029 "subsystem": "accel", 00:41:37.029 "config": [ 00:41:37.029 { 00:41:37.029 "method": "accel_set_options", 00:41:37.029 "params": { 00:41:37.029 "buf_count": 2048, 00:41:37.029 "large_cache_size": 16, 00:41:37.029 "sequence_count": 2048, 00:41:37.029 "small_cache_size": 128, 00:41:37.029 "task_count": 2048 00:41:37.029 } 00:41:37.029 } 00:41:37.029 ] 00:41:37.029 }, 00:41:37.029 { 00:41:37.029 "subsystem": "bdev", 00:41:37.029 "config": [ 00:41:37.029 { 00:41:37.029 "method": "bdev_set_options", 00:41:37.029 "params": { 00:41:37.029 "bdev_auto_examine": true, 00:41:37.029 "bdev_io_cache_size": 256, 00:41:37.029 "bdev_io_pool_size": 65535, 00:41:37.029 "iobuf_large_cache_size": 16, 00:41:37.029 "iobuf_small_cache_size": 128 00:41:37.029 } 00:41:37.029 }, 00:41:37.029 { 00:41:37.029 "method": "bdev_raid_set_options", 00:41:37.029 "params": { 00:41:37.029 "process_max_bandwidth_mb_sec": 0, 00:41:37.029 "process_window_size_kb": 1024 00:41:37.029 } 00:41:37.029 }, 00:41:37.029 { 00:41:37.029 "method": "bdev_iscsi_set_options", 00:41:37.029 "params": { 00:41:37.029 "timeout_sec": 30 00:41:37.029 } 00:41:37.029 }, 00:41:37.029 { 00:41:37.029 "method": "bdev_nvme_set_options", 00:41:37.029 "params": { 00:41:37.029 "action_on_timeout": "none", 00:41:37.029 "allow_accel_sequence": false, 00:41:37.029 "arbitration_burst": 0, 00:41:37.029 "bdev_retry_count": 3, 00:41:37.029 "ctrlr_loss_timeout_sec": 0, 00:41:37.029 "delay_cmd_submit": true, 00:41:37.029 "dhchap_dhgroups": [ 00:41:37.029 "null", 00:41:37.029 "ffdhe2048", 00:41:37.029 "ffdhe3072", 00:41:37.029 "ffdhe4096", 00:41:37.029 "ffdhe6144", 00:41:37.029 "ffdhe8192" 00:41:37.029 ], 00:41:37.029 "dhchap_digests": [ 00:41:37.029 "sha256", 00:41:37.029 "sha384", 00:41:37.029 "sha512" 00:41:37.029 ], 00:41:37.029 "disable_auto_failback": false, 00:41:37.029 "fast_io_fail_timeout_sec": 0, 00:41:37.029 "generate_uuids": false, 00:41:37.029 "high_priority_weight": 0, 00:41:37.029 "io_path_stat": false, 00:41:37.029 "io_queue_requests": 0, 00:41:37.029 
"keep_alive_timeout_ms": 10000, 00:41:37.029 "low_priority_weight": 0, 00:41:37.029 "medium_priority_weight": 0, 00:41:37.029 "nvme_adminq_poll_period_us": 10000, 00:41:37.029 "nvme_error_stat": false, 00:41:37.029 "nvme_ioq_poll_period_us": 0, 00:41:37.029 "rdma_cm_event_timeout_ms": 0, 00:41:37.029 "rdma_max_cq_size": 0, 00:41:37.029 "rdma_srq_size": 0, 00:41:37.029 "reconnect_delay_sec": 0, 00:41:37.029 "timeout_admin_us": 0, 00:41:37.029 "timeout_us": 0, 00:41:37.029 "transport_ack_timeout": 0, 00:41:37.029 "transport_retry_count": 4, 00:41:37.029 "transport_tos": 0 00:41:37.029 } 00:41:37.029 }, 00:41:37.029 { 00:41:37.029 "method": "bdev_nvme_set_hotplug", 00:41:37.029 "params": { 00:41:37.029 "enable": false, 00:41:37.029 "period_us": 100000 00:41:37.029 } 00:41:37.029 }, 00:41:37.029 { 00:41:37.029 "method": "bdev_wait_for_examine" 00:41:37.029 } 00:41:37.029 ] 00:41:37.029 }, 00:41:37.029 { 00:41:37.029 "subsystem": "scsi", 00:41:37.029 "config": null 00:41:37.029 }, 00:41:37.029 { 00:41:37.029 "subsystem": "scheduler", 00:41:37.029 "config": [ 00:41:37.029 { 00:41:37.029 "method": "framework_set_scheduler", 00:41:37.029 "params": { 00:41:37.029 "name": "static" 00:41:37.029 } 00:41:37.029 } 00:41:37.029 ] 00:41:37.029 }, 00:41:37.029 { 00:41:37.029 "subsystem": "vhost_scsi", 00:41:37.029 "config": [] 00:41:37.029 }, 00:41:37.029 { 00:41:37.029 "subsystem": "vhost_blk", 00:41:37.029 "config": [] 00:41:37.029 }, 00:41:37.029 { 00:41:37.029 "subsystem": "ublk", 00:41:37.029 "config": [] 00:41:37.029 }, 00:41:37.029 { 00:41:37.029 "subsystem": "nbd", 00:41:37.029 "config": [] 00:41:37.029 }, 00:41:37.029 { 00:41:37.029 "subsystem": "nvmf", 00:41:37.029 "config": [ 00:41:37.029 { 00:41:37.029 "method": "nvmf_set_config", 00:41:37.029 "params": { 00:41:37.029 "admin_cmd_passthru": { 00:41:37.029 "identify_ctrlr": false 00:41:37.029 }, 00:41:37.029 "dhchap_dhgroups": [ 00:41:37.029 "null", 00:41:37.029 "ffdhe2048", 00:41:37.029 "ffdhe3072", 00:41:37.029 "ffdhe4096", 00:41:37.029 "ffdhe6144", 00:41:37.029 "ffdhe8192" 00:41:37.029 ], 00:41:37.029 "dhchap_digests": [ 00:41:37.029 "sha256", 00:41:37.029 "sha384", 00:41:37.029 "sha512" 00:41:37.029 ], 00:41:37.029 "discovery_filter": "match_any" 00:41:37.029 } 00:41:37.029 }, 00:41:37.029 { 00:41:37.029 "method": "nvmf_set_max_subsystems", 00:41:37.029 "params": { 00:41:37.029 "max_subsystems": 1024 00:41:37.029 } 00:41:37.029 }, 00:41:37.029 { 00:41:37.029 "method": "nvmf_set_crdt", 00:41:37.029 "params": { 00:41:37.029 "crdt1": 0, 00:41:37.029 "crdt2": 0, 00:41:37.029 "crdt3": 0 00:41:37.029 } 00:41:37.029 }, 00:41:37.029 { 00:41:37.029 "method": "nvmf_create_transport", 00:41:37.029 "params": { 00:41:37.029 "abort_timeout_sec": 1, 00:41:37.029 "ack_timeout": 0, 00:41:37.029 "buf_cache_size": 4294967295, 00:41:37.030 "c2h_success": true, 00:41:37.030 "data_wr_pool_size": 0, 00:41:37.030 "dif_insert_or_strip": false, 00:41:37.030 "in_capsule_data_size": 4096, 00:41:37.030 "io_unit_size": 131072, 00:41:37.030 "max_aq_depth": 128, 00:41:37.030 "max_io_qpairs_per_ctrlr": 127, 00:41:37.030 "max_io_size": 131072, 00:41:37.030 "max_queue_depth": 128, 00:41:37.030 "num_shared_buffers": 511, 00:41:37.030 "sock_priority": 0, 00:41:37.030 "trtype": "TCP", 00:41:37.030 "zcopy": false 00:41:37.030 } 00:41:37.030 } 00:41:37.030 ] 00:41:37.030 }, 00:41:37.030 { 00:41:37.030 "subsystem": "iscsi", 00:41:37.030 "config": [ 00:41:37.030 { 00:41:37.030 "method": "iscsi_set_options", 00:41:37.030 "params": { 00:41:37.030 "allow_duplicated_isid": false, 
00:41:37.030 "chap_group": 0, 00:41:37.030 "data_out_pool_size": 2048, 00:41:37.030 "default_time2retain": 20, 00:41:37.030 "default_time2wait": 2, 00:41:37.030 "disable_chap": false, 00:41:37.030 "error_recovery_level": 0, 00:41:37.030 "first_burst_length": 8192, 00:41:37.030 "immediate_data": true, 00:41:37.030 "immediate_data_pool_size": 16384, 00:41:37.030 "max_connections_per_session": 2, 00:41:37.030 "max_large_datain_per_connection": 64, 00:41:37.030 "max_queue_depth": 64, 00:41:37.030 "max_r2t_per_connection": 4, 00:41:37.030 "max_sessions": 128, 00:41:37.030 "mutual_chap": false, 00:41:37.030 "node_base": "iqn.2016-06.io.spdk", 00:41:37.030 "nop_in_interval": 30, 00:41:37.030 "nop_timeout": 60, 00:41:37.030 "pdu_pool_size": 36864, 00:41:37.030 "require_chap": false 00:41:37.030 } 00:41:37.030 } 00:41:37.030 ] 00:41:37.030 } 00:41:37.030 ] 00:41:37.030 } 00:41:37.030 09:52:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:41:37.030 09:52:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59310 00:41:37.030 09:52:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59310 ']' 00:41:37.030 09:52:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59310 00:41:37.030 09:52:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:41:37.030 09:52:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:37.030 09:52:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59310 00:41:37.030 killing process with pid 59310 00:41:37.030 09:52:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:37.030 09:52:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:37.030 09:52:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59310' 00:41:37.030 09:52:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59310 00:41:37.030 09:52:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59310 00:41:37.289 09:52:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:41:37.289 09:52:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59343 00:41:37.289 09:52:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:41:42.582 09:52:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59343 00:41:42.582 09:52:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59343 ']' 00:41:42.582 09:52:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59343 00:41:42.582 09:52:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:41:42.582 09:52:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:42.582 09:52:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59343 00:41:42.582 killing process with pid 59343 00:41:42.582 09:52:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:42.582 09:52:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:42.582 09:52:49 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59343' 00:41:42.582 09:52:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59343 00:41:42.582 09:52:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59343 00:41:42.843 09:52:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:41:42.843 09:52:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:41:42.843 00:41:42.843 real 0m6.857s 00:41:42.843 user 0m6.655s 00:41:42.843 sys 0m0.549s 00:41:42.843 ************************************ 00:41:42.843 END TEST skip_rpc_with_json 00:41:42.843 ************************************ 00:41:42.843 09:52:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:42.843 09:52:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:41:42.843 09:52:49 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:41:42.843 09:52:49 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:42.843 09:52:49 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:42.843 09:52:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:41:42.843 ************************************ 00:41:42.843 START TEST skip_rpc_with_delay 00:41:42.843 ************************************ 00:41:42.843 09:52:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:41:42.843 09:52:49 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:41:42.843 09:52:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:41:42.843 09:52:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:41:42.843 09:52:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:42.843 09:52:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:42.843 09:52:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:42.843 09:52:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:42.843 09:52:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:42.843 09:52:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:42.843 09:52:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:42.843 09:52:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:41:42.843 09:52:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:41:42.843 [2024-12-09 09:52:49.775238] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:41:42.843 09:52:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:41:42.843 09:52:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:42.843 09:52:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:42.843 ************************************ 00:41:42.843 END TEST skip_rpc_with_delay 00:41:42.843 ************************************ 00:41:42.843 09:52:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:42.843 00:41:42.843 real 0m0.085s 00:41:42.843 user 0m0.049s 00:41:42.843 sys 0m0.035s 00:41:42.843 09:52:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:42.843 09:52:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:41:42.843 09:52:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:41:42.843 09:52:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:41:42.843 09:52:49 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:41:42.843 09:52:49 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:42.843 09:52:49 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:42.843 09:52:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:41:42.843 ************************************ 00:41:42.843 START TEST exit_on_failed_rpc_init 00:41:42.843 ************************************ 00:41:42.843 09:52:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:41:42.843 09:52:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59458 00:41:42.843 09:52:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:41:42.843 09:52:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59458 00:41:42.843 09:52:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 59458 ']' 00:41:42.843 09:52:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:42.843 09:52:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:42.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:42.843 09:52:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:42.843 09:52:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:42.843 09:52:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:41:43.103 [2024-12-09 09:52:49.917172] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:41:43.103 [2024-12-09 09:52:49.917622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59458 ] 00:41:43.103 [2024-12-09 09:52:50.066053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:43.103 [2024-12-09 09:52:50.119010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:44.041 09:52:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:44.041 09:52:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:41:44.041 09:52:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:41:44.041 09:52:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:41:44.041 09:52:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:41:44.041 09:52:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:41:44.041 09:52:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:44.041 09:52:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:44.041 09:52:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:44.041 09:52:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:44.041 09:52:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:44.041 09:52:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:44.041 09:52:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:44.041 09:52:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:41:44.041 09:52:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:41:44.041 [2024-12-09 09:52:50.903739] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:41:44.041 [2024-12-09 09:52:50.903875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59488 ] 00:41:44.041 [2024-12-09 09:52:51.053838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:44.301 [2024-12-09 09:52:51.137141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:44.301 [2024-12-09 09:52:51.137340] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:41:44.301 [2024-12-09 09:52:51.137385] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:41:44.301 [2024-12-09 09:52:51.137404] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:44.301 09:52:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:41:44.301 09:52:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:44.301 09:52:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:41:44.301 09:52:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:41:44.301 09:52:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:41:44.301 09:52:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:44.301 09:52:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:41:44.301 09:52:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59458 00:41:44.301 09:52:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 59458 ']' 00:41:44.301 09:52:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 59458 00:41:44.301 09:52:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:41:44.301 09:52:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:44.301 09:52:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59458 00:41:44.301 killing process with pid 59458 00:41:44.301 09:52:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:44.301 09:52:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:44.301 09:52:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59458' 00:41:44.301 09:52:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 59458 00:41:44.301 09:52:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 59458 00:41:44.560 00:41:44.560 real 0m1.721s 00:41:44.560 user 0m2.007s 00:41:44.560 sys 0m0.393s 00:41:44.560 09:52:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:44.560 ************************************ 00:41:44.560 END TEST exit_on_failed_rpc_init 00:41:44.560 ************************************ 00:41:44.560 09:52:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:41:44.819 09:52:51 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:41:44.819 ************************************ 00:41:44.819 END TEST skip_rpc 00:41:44.819 ************************************ 00:41:44.819 00:41:44.819 real 0m14.516s 00:41:44.819 user 0m13.982s 00:41:44.819 sys 0m1.502s 00:41:44.819 09:52:51 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:44.819 09:52:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:41:44.819 09:52:51 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:41:44.819 09:52:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:44.819 09:52:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:44.819 09:52:51 -- common/autotest_common.sh@10 -- # set +x 00:41:44.819 
************************************ 00:41:44.819 START TEST rpc_client 00:41:44.819 ************************************ 00:41:44.819 09:52:51 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:41:44.819 * Looking for test storage... 00:41:44.819 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:41:44.819 09:52:51 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:44.819 09:52:51 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:41:44.819 09:52:51 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:45.080 09:52:51 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:45.080 09:52:51 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:45.080 09:52:51 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:45.080 09:52:51 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:45.080 09:52:51 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:41:45.080 09:52:51 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:41:45.080 09:52:51 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:41:45.080 09:52:51 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:41:45.080 09:52:51 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:41:45.080 09:52:51 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:41:45.080 09:52:51 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:41:45.080 09:52:51 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:45.080 09:52:51 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:41:45.080 09:52:51 rpc_client -- scripts/common.sh@345 -- # : 1 00:41:45.080 09:52:51 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:45.080 09:52:51 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:45.080 09:52:51 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:41:45.080 09:52:51 rpc_client -- scripts/common.sh@353 -- # local d=1 00:41:45.080 09:52:51 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:45.080 09:52:51 rpc_client -- scripts/common.sh@355 -- # echo 1 00:41:45.080 09:52:51 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:41:45.080 09:52:51 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:41:45.080 09:52:51 rpc_client -- scripts/common.sh@353 -- # local d=2 00:41:45.080 09:52:51 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:45.080 09:52:51 rpc_client -- scripts/common.sh@355 -- # echo 2 00:41:45.080 09:52:51 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:41:45.080 09:52:51 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:45.080 09:52:51 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:45.080 09:52:51 rpc_client -- scripts/common.sh@368 -- # return 0 00:41:45.080 09:52:51 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:45.080 09:52:51 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:45.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:45.080 --rc genhtml_branch_coverage=1 00:41:45.080 --rc genhtml_function_coverage=1 00:41:45.080 --rc genhtml_legend=1 00:41:45.080 --rc geninfo_all_blocks=1 00:41:45.080 --rc geninfo_unexecuted_blocks=1 00:41:45.080 00:41:45.080 ' 00:41:45.080 09:52:51 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:45.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:45.080 --rc genhtml_branch_coverage=1 00:41:45.080 --rc genhtml_function_coverage=1 00:41:45.080 --rc genhtml_legend=1 00:41:45.080 --rc geninfo_all_blocks=1 00:41:45.080 --rc geninfo_unexecuted_blocks=1 00:41:45.080 00:41:45.080 ' 00:41:45.080 09:52:51 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:45.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:45.080 --rc genhtml_branch_coverage=1 00:41:45.080 --rc genhtml_function_coverage=1 00:41:45.080 --rc genhtml_legend=1 00:41:45.080 --rc geninfo_all_blocks=1 00:41:45.080 --rc geninfo_unexecuted_blocks=1 00:41:45.080 00:41:45.080 ' 00:41:45.080 09:52:51 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:45.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:45.080 --rc genhtml_branch_coverage=1 00:41:45.080 --rc genhtml_function_coverage=1 00:41:45.080 --rc genhtml_legend=1 00:41:45.080 --rc geninfo_all_blocks=1 00:41:45.080 --rc geninfo_unexecuted_blocks=1 00:41:45.080 00:41:45.080 ' 00:41:45.080 09:52:51 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:41:45.080 OK 00:41:45.080 09:52:51 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:41:45.080 00:41:45.080 real 0m0.258s 00:41:45.080 user 0m0.142s 00:41:45.080 sys 0m0.132s 00:41:45.080 09:52:51 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:45.080 09:52:51 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:41:45.080 ************************************ 00:41:45.080 END TEST rpc_client 00:41:45.080 ************************************ 00:41:45.080 09:52:52 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:41:45.080 09:52:52 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:45.080 09:52:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:45.080 09:52:52 -- common/autotest_common.sh@10 -- # set +x 00:41:45.080 ************************************ 00:41:45.080 START TEST json_config 00:41:45.080 ************************************ 00:41:45.080 09:52:52 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:41:45.080 09:52:52 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:45.080 09:52:52 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:41:45.080 09:52:52 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:45.341 09:52:52 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:45.341 09:52:52 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:45.341 09:52:52 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:45.341 09:52:52 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:45.341 09:52:52 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:41:45.341 09:52:52 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:41:45.341 09:52:52 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:41:45.341 09:52:52 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:41:45.341 09:52:52 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:41:45.341 09:52:52 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:41:45.341 09:52:52 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:41:45.341 09:52:52 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:45.341 09:52:52 json_config -- scripts/common.sh@344 -- # case "$op" in 00:41:45.341 09:52:52 json_config -- scripts/common.sh@345 -- # : 1 00:41:45.341 09:52:52 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:45.341 09:52:52 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:45.341 09:52:52 json_config -- scripts/common.sh@365 -- # decimal 1 00:41:45.341 09:52:52 json_config -- scripts/common.sh@353 -- # local d=1 00:41:45.341 09:52:52 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:45.341 09:52:52 json_config -- scripts/common.sh@355 -- # echo 1 00:41:45.341 09:52:52 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:41:45.341 09:52:52 json_config -- scripts/common.sh@366 -- # decimal 2 00:41:45.341 09:52:52 json_config -- scripts/common.sh@353 -- # local d=2 00:41:45.341 09:52:52 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:45.341 09:52:52 json_config -- scripts/common.sh@355 -- # echo 2 00:41:45.341 09:52:52 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:41:45.341 09:52:52 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:45.341 09:52:52 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:45.341 09:52:52 json_config -- scripts/common.sh@368 -- # return 0 00:41:45.341 09:52:52 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:45.341 09:52:52 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:45.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:45.341 --rc genhtml_branch_coverage=1 00:41:45.341 --rc genhtml_function_coverage=1 00:41:45.341 --rc genhtml_legend=1 00:41:45.341 --rc geninfo_all_blocks=1 00:41:45.341 --rc geninfo_unexecuted_blocks=1 00:41:45.341 00:41:45.341 ' 00:41:45.341 09:52:52 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:45.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:45.341 --rc genhtml_branch_coverage=1 00:41:45.341 --rc genhtml_function_coverage=1 00:41:45.341 --rc genhtml_legend=1 00:41:45.341 --rc geninfo_all_blocks=1 00:41:45.341 --rc geninfo_unexecuted_blocks=1 00:41:45.341 00:41:45.341 ' 00:41:45.341 09:52:52 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:45.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:45.341 --rc genhtml_branch_coverage=1 00:41:45.341 --rc genhtml_function_coverage=1 00:41:45.341 --rc genhtml_legend=1 00:41:45.341 --rc geninfo_all_blocks=1 00:41:45.341 --rc geninfo_unexecuted_blocks=1 00:41:45.341 00:41:45.341 ' 00:41:45.341 09:52:52 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:45.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:45.342 --rc genhtml_branch_coverage=1 00:41:45.342 --rc genhtml_function_coverage=1 00:41:45.342 --rc genhtml_legend=1 00:41:45.342 --rc geninfo_all_blocks=1 00:41:45.342 --rc geninfo_unexecuted_blocks=1 00:41:45.342 00:41:45.342 ' 00:41:45.342 09:52:52 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:41:45.342 09:52:52 json_config -- nvmf/common.sh@7 -- # uname -s 00:41:45.342 09:52:52 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:45.342 09:52:52 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:45.342 09:52:52 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:45.342 09:52:52 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:45.342 09:52:52 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:45.342 09:52:52 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:45.342 09:52:52 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:45.342 09:52:52 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:45.342 09:52:52 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:45.342 09:52:52 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:45.342 09:52:52 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:41:45.342 09:52:52 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:41:45.342 09:52:52 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:45.342 09:52:52 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:45.342 09:52:52 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:41:45.342 09:52:52 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:45.342 09:52:52 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:45.342 09:52:52 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:41:45.342 09:52:52 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:45.342 09:52:52 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:45.342 09:52:52 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:45.342 09:52:52 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:45.342 09:52:52 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:45.342 09:52:52 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:45.342 09:52:52 json_config -- paths/export.sh@5 -- # export PATH 00:41:45.342 09:52:52 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:45.342 09:52:52 json_config -- nvmf/common.sh@51 -- # : 0 00:41:45.342 09:52:52 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:45.342 09:52:52 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:45.342 09:52:52 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:45.342 09:52:52 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:45.342 09:52:52 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:45.342 09:52:52 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:45.342 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:45.342 09:52:52 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:45.342 09:52:52 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:45.342 09:52:52 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:45.342 09:52:52 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:41:45.342 09:52:52 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:41:45.342 09:52:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:41:45.342 09:52:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:41:45.342 09:52:52 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:41:45.342 09:52:52 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:41:45.342 09:52:52 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:41:45.342 09:52:52 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:41:45.342 09:52:52 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:41:45.342 09:52:52 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:41:45.342 09:52:52 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:41:45.342 09:52:52 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:41:45.342 09:52:52 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:41:45.342 09:52:52 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:41:45.342 09:52:52 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:41:45.342 09:52:52 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:41:45.342 INFO: JSON configuration test init 00:41:45.342 09:52:52 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:41:45.342 09:52:52 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:41:45.342 09:52:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:45.342 09:52:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:41:45.342 09:52:52 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:41:45.342 09:52:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:45.342 09:52:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:41:45.342 09:52:52 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:41:45.342 09:52:52 json_config -- json_config/common.sh@9 -- # local app=target 00:41:45.342 09:52:52 json_config -- json_config/common.sh@10 -- # shift 
00:41:45.342 09:52:52 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:41:45.342 09:52:52 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:41:45.342 09:52:52 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:41:45.342 09:52:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:41:45.342 09:52:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:41:45.342 09:52:52 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59622 00:41:45.342 09:52:52 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:41:45.342 Waiting for target to run... 00:41:45.342 09:52:52 json_config -- json_config/common.sh@25 -- # waitforlisten 59622 /var/tmp/spdk_tgt.sock 00:41:45.342 09:52:52 json_config -- common/autotest_common.sh@835 -- # '[' -z 59622 ']' 00:41:45.342 09:52:52 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:41:45.342 09:52:52 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:45.342 09:52:52 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:41:45.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:41:45.342 09:52:52 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:41:45.342 09:52:52 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:45.342 09:52:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:41:45.342 [2024-12-09 09:52:52.331406] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:41:45.342 [2024-12-09 09:52:52.331611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59622 ] 00:41:45.910 [2024-12-09 09:52:52.699917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:45.910 [2024-12-09 09:52:52.744355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:46.479 09:52:53 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:46.479 09:52:53 json_config -- common/autotest_common.sh@868 -- # return 0 00:41:46.479 09:52:53 json_config -- json_config/common.sh@26 -- # echo '' 00:41:46.479 00:41:46.479 09:52:53 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:41:46.479 09:52:53 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:41:46.479 09:52:53 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:46.479 09:52:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:41:46.479 09:52:53 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:41:46.479 09:52:53 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:41:46.479 09:52:53 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:46.479 09:52:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:41:46.479 09:52:53 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:41:46.479 09:52:53 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:41:46.479 09:52:53 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:41:46.738 09:52:53 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:41:46.738 09:52:53 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:41:46.738 09:52:53 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:46.738 09:52:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:41:46.738 09:52:53 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:41:46.738 09:52:53 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:41:46.738 09:52:53 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:41:46.738 09:52:53 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:41:46.738 09:52:53 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:41:46.738 09:52:53 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:41:46.738 09:52:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:41:46.738 09:52:53 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:41:46.998 09:52:54 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:41:46.998 09:52:54 json_config -- json_config/json_config.sh@51 -- # local get_types 00:41:46.998 09:52:54 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:41:46.998 09:52:54 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:41:46.998 09:52:54 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:41:46.998 09:52:54 json_config -- json_config/json_config.sh@54 -- # sort 00:41:46.998 09:52:54 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:41:46.998 09:52:54 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:41:46.998 09:52:54 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:41:46.998 09:52:54 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:41:46.998 09:52:54 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:46.998 09:52:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:41:47.257 09:52:54 json_config -- json_config/json_config.sh@62 -- # return 0 00:41:47.257 09:52:54 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:41:47.257 09:52:54 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:41:47.257 09:52:54 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:41:47.257 09:52:54 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:41:47.257 09:52:54 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:41:47.257 09:52:54 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:41:47.257 09:52:54 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:47.257 09:52:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:41:47.257 09:52:54 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:41:47.257 09:52:54 json_config -- json_config/json_config.sh@240 -- # [[ tcp == 
\r\d\m\a ]] 00:41:47.257 09:52:54 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:41:47.257 09:52:54 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:41:47.257 09:52:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:41:47.518 MallocForNvmf0 00:41:47.518 09:52:54 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:41:47.518 09:52:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:41:47.518 MallocForNvmf1 00:41:47.778 09:52:54 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:41:47.778 09:52:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:41:47.778 [2024-12-09 09:52:54.773126] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:47.778 09:52:54 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:47.778 09:52:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:48.037 09:52:55 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:41:48.037 09:52:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:41:48.296 09:52:55 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:41:48.296 09:52:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:41:48.581 09:52:55 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:41:48.581 09:52:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:41:48.841 [2024-12-09 09:52:55.659952] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:41:48.841 09:52:55 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:41:48.841 09:52:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:48.841 09:52:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:41:48.841 09:52:55 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:41:48.841 09:52:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:48.842 09:52:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:41:48.842 09:52:55 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:41:48.842 09:52:55 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name 
MallocBdevForConfigChangeCheck 00:41:48.842 09:52:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:41:49.101 MallocBdevForConfigChangeCheck 00:41:49.101 09:52:56 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:41:49.101 09:52:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:49.101 09:52:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:41:49.101 09:52:56 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:41:49.101 09:52:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:41:49.676 09:52:56 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:41:49.676 INFO: shutting down applications... 00:41:49.676 09:52:56 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:41:49.676 09:52:56 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:41:49.676 09:52:56 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:41:49.676 09:52:56 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:41:49.936 Calling clear_iscsi_subsystem 00:41:49.936 Calling clear_nvmf_subsystem 00:41:49.936 Calling clear_nbd_subsystem 00:41:49.936 Calling clear_ublk_subsystem 00:41:49.936 Calling clear_vhost_blk_subsystem 00:41:49.936 Calling clear_vhost_scsi_subsystem 00:41:49.936 Calling clear_bdev_subsystem 00:41:49.936 09:52:56 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:41:49.936 09:52:56 json_config -- json_config/json_config.sh@350 -- # count=100 00:41:49.936 09:52:56 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:41:49.936 09:52:56 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:41:49.936 09:52:56 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:41:49.936 09:52:56 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:41:50.196 09:52:57 json_config -- json_config/json_config.sh@352 -- # break 00:41:50.196 09:52:57 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:41:50.196 09:52:57 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:41:50.196 09:52:57 json_config -- json_config/common.sh@31 -- # local app=target 00:41:50.196 09:52:57 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:41:50.196 09:52:57 json_config -- json_config/common.sh@35 -- # [[ -n 59622 ]] 00:41:50.196 09:52:57 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59622 00:41:50.196 09:52:57 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:41:50.196 09:52:57 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:41:50.196 09:52:57 json_config -- json_config/common.sh@41 -- # kill -0 59622 00:41:50.196 09:52:57 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:41:50.765 09:52:57 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:41:50.765 09:52:57 json_config -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:41:50.765 09:52:57 json_config -- json_config/common.sh@41 -- # kill -0 59622 00:41:50.765 09:52:57 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:41:50.765 09:52:57 json_config -- json_config/common.sh@43 -- # break 00:41:50.765 09:52:57 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:41:50.765 SPDK target shutdown done 00:41:50.765 09:52:57 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:41:50.765 INFO: relaunching applications... 00:41:50.765 09:52:57 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:41:50.765 09:52:57 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:41:50.765 09:52:57 json_config -- json_config/common.sh@9 -- # local app=target 00:41:50.765 09:52:57 json_config -- json_config/common.sh@10 -- # shift 00:41:50.765 09:52:57 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:41:50.765 09:52:57 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:41:50.765 09:52:57 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:41:50.765 09:52:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:41:50.765 09:52:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:41:50.765 09:52:57 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59897 00:41:50.765 09:52:57 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:41:50.765 09:52:57 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:41:50.765 Waiting for target to run... 00:41:50.765 09:52:57 json_config -- json_config/common.sh@25 -- # waitforlisten 59897 /var/tmp/spdk_tgt.sock 00:41:50.765 09:52:57 json_config -- common/autotest_common.sh@835 -- # '[' -z 59897 ']' 00:41:50.765 09:52:57 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:41:50.765 09:52:57 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:50.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:41:50.765 09:52:57 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:41:50.765 09:52:57 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:50.765 09:52:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:41:50.765 [2024-12-09 09:52:57.735211] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
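The spdk_tgt_config.json replayed by this relaunch encodes the NVMe/TCP target state built earlier in the trace. Pulled together in one place, that RPC sequence was (same socket and repo paths as above; a recap of the traced calls, not an addition to them):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk_tgt.sock
  # two malloc bdevs to export as namespaces (total size in MiB, block size in bytes)
  $rpc -s $sock bdev_malloc_create 8 512 --name MallocForNvmf0
  $rpc -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  # TCP transport, one subsystem carrying both namespaces, one 127.0.0.1:4420 listener
  $rpc -s $sock nvmf_create_transport -t tcp -u 8192 -c 0
  $rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420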
00:41:50.765 [2024-12-09 09:52:57.735285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59897 ] 00:41:51.341 [2024-12-09 09:52:58.105075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:51.341 [2024-12-09 09:52:58.167462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:51.600 [2024-12-09 09:52:58.525841] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:51.600 [2024-12-09 09:52:58.557898] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:41:51.600 09:52:58 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:51.600 09:52:58 json_config -- common/autotest_common.sh@868 -- # return 0 00:41:51.600 00:41:51.600 09:52:58 json_config -- json_config/common.sh@26 -- # echo '' 00:41:51.600 09:52:58 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:41:51.600 INFO: Checking if target configuration is the same... 00:41:51.600 09:52:58 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:41:51.600 09:52:58 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:41:51.600 09:52:58 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:41:51.600 09:52:58 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:41:51.861 + '[' 2 -ne 2 ']' 00:41:51.861 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:41:51.861 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:41:51.861 + rootdir=/home/vagrant/spdk_repo/spdk 00:41:51.862 +++ basename /dev/fd/62 00:41:51.862 ++ mktemp /tmp/62.XXX 00:41:51.862 + tmp_file_1=/tmp/62.S19 00:41:51.862 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:41:51.862 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:41:51.862 + tmp_file_2=/tmp/spdk_tgt_config.json.cPV 00:41:51.862 + ret=0 00:41:51.862 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:41:52.121 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:41:52.122 + diff -u /tmp/62.S19 /tmp/spdk_tgt_config.json.cPV 00:41:52.122 + echo 'INFO: JSON config files are the same' 00:41:52.122 INFO: JSON config files are the same 00:41:52.122 + rm /tmp/62.S19 /tmp/spdk_tgt_config.json.cPV 00:41:52.122 + exit 0 00:41:52.122 09:52:59 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:41:52.122 09:52:59 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:41:52.122 INFO: changing configuration and checking if this can be detected... 
00:41:52.122 09:52:59 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:41:52.122 09:52:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:41:52.381 09:52:59 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:41:52.381 09:52:59 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:41:52.381 09:52:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:41:52.381 + '[' 2 -ne 2 ']' 00:41:52.381 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:41:52.381 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:41:52.381 + rootdir=/home/vagrant/spdk_repo/spdk 00:41:52.381 +++ basename /dev/fd/62 00:41:52.381 ++ mktemp /tmp/62.XXX 00:41:52.381 + tmp_file_1=/tmp/62.KiL 00:41:52.381 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:41:52.381 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:41:52.381 + tmp_file_2=/tmp/spdk_tgt_config.json.yZ6 00:41:52.381 + ret=0 00:41:52.381 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:41:52.639 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:41:52.897 + diff -u /tmp/62.KiL /tmp/spdk_tgt_config.json.yZ6 00:41:52.897 + ret=1 00:41:52.897 + echo '=== Start of file: /tmp/62.KiL ===' 00:41:52.897 + cat /tmp/62.KiL 00:41:52.897 + echo '=== End of file: /tmp/62.KiL ===' 00:41:52.897 + echo '' 00:41:52.897 + echo '=== Start of file: /tmp/spdk_tgt_config.json.yZ6 ===' 00:41:52.897 + cat /tmp/spdk_tgt_config.json.yZ6 00:41:52.897 + echo '=== End of file: /tmp/spdk_tgt_config.json.yZ6 ===' 00:41:52.897 + echo '' 00:41:52.897 + rm /tmp/62.KiL /tmp/spdk_tgt_config.json.yZ6 00:41:52.897 + exit 1 00:41:52.897 INFO: configuration change detected. 00:41:52.897 09:52:59 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
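Both json_diff.sh invocations above reduce to the same check: dump the running target's configuration, normalize both JSON documents with config_filter.py -method sort, and diff the results. Stripped of the /dev/fd/62 plumbing, that is roughly:

  spdk=/home/vagrant/spdk_repo/spdk
  $spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | $spdk/test/json_config/config_filter.py -method sort > /tmp/live.json
  $spdk/test/json_config/config_filter.py -method sort \
      < $spdk/spdk_tgt_config.json > /tmp/ref.json
  diff -u /tmp/ref.json /tmp/live.json
  # empty diff -> 'JSON config files are the same'; after bdev_malloc_delete
  # MallocBdevForConfigChangeCheck the diff is non-empty, so ret=1 is the expected result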
00:41:52.897 09:52:59 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:41:52.897 09:52:59 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:41:52.897 09:52:59 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:52.897 09:52:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:41:52.897 09:52:59 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:41:52.897 09:52:59 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:41:52.897 09:52:59 json_config -- json_config/json_config.sh@324 -- # [[ -n 59897 ]] 00:41:52.897 09:52:59 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:41:52.897 09:52:59 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:41:52.897 09:52:59 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:52.897 09:52:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:41:52.897 09:52:59 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:41:52.897 09:52:59 json_config -- json_config/json_config.sh@200 -- # uname -s 00:41:52.897 09:52:59 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:41:52.897 09:52:59 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:41:52.897 09:52:59 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:41:52.897 09:52:59 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:41:52.897 09:52:59 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:52.897 09:52:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:41:52.897 09:52:59 json_config -- json_config/json_config.sh@330 -- # killprocess 59897 00:41:52.897 09:52:59 json_config -- common/autotest_common.sh@954 -- # '[' -z 59897 ']' 00:41:52.897 09:52:59 json_config -- common/autotest_common.sh@958 -- # kill -0 59897 00:41:52.898 09:52:59 json_config -- common/autotest_common.sh@959 -- # uname 00:41:52.898 09:52:59 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:52.898 09:52:59 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59897 00:41:52.898 09:52:59 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:52.898 09:52:59 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:52.898 killing process with pid 59897 00:41:52.898 09:52:59 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59897' 00:41:52.898 09:52:59 json_config -- common/autotest_common.sh@973 -- # kill 59897 00:41:52.898 09:52:59 json_config -- common/autotest_common.sh@978 -- # wait 59897 00:41:53.156 09:53:00 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:41:53.156 09:53:00 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:41:53.156 09:53:00 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:53.156 09:53:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:41:53.416 09:53:00 json_config -- json_config/json_config.sh@335 -- # return 0 00:41:53.416 INFO: Success 00:41:53.416 09:53:00 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:41:53.416 00:41:53.416 real 0m8.225s 00:41:53.416 user 0m11.468s 00:41:53.416 sys 0m1.986s 00:41:53.416 
09:53:00 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:53.416 09:53:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:41:53.416 ************************************ 00:41:53.416 END TEST json_config 00:41:53.416 ************************************ 00:41:53.416 09:53:00 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:41:53.416 09:53:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:53.416 09:53:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:53.416 09:53:00 -- common/autotest_common.sh@10 -- # set +x 00:41:53.416 ************************************ 00:41:53.416 START TEST json_config_extra_key 00:41:53.416 ************************************ 00:41:53.416 09:53:00 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:41:53.416 09:53:00 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:53.416 09:53:00 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:41:53.416 09:53:00 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:53.676 09:53:00 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:53.676 09:53:00 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:53.676 09:53:00 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:53.676 09:53:00 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:53.676 09:53:00 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:41:53.676 09:53:00 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:41:53.676 09:53:00 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:41:53.676 09:53:00 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:41:53.676 09:53:00 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:41:53.676 09:53:00 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:41:53.676 09:53:00 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:41:53.676 09:53:00 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:53.676 09:53:00 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:41:53.676 09:53:00 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:41:53.676 09:53:00 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:53.676 09:53:00 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:53.676 09:53:00 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:41:53.676 09:53:00 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:41:53.676 09:53:00 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:53.676 09:53:00 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:41:53.676 09:53:00 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:41:53.676 09:53:00 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:41:53.676 09:53:00 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:41:53.676 09:53:00 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:53.676 09:53:00 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:41:53.676 09:53:00 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:41:53.676 09:53:00 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:53.676 09:53:00 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:53.676 09:53:00 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:41:53.676 09:53:00 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:53.676 09:53:00 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:53.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:53.676 --rc genhtml_branch_coverage=1 00:41:53.676 --rc genhtml_function_coverage=1 00:41:53.676 --rc genhtml_legend=1 00:41:53.676 --rc geninfo_all_blocks=1 00:41:53.676 --rc geninfo_unexecuted_blocks=1 00:41:53.676 00:41:53.676 ' 00:41:53.676 09:53:00 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:53.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:53.676 --rc genhtml_branch_coverage=1 00:41:53.676 --rc genhtml_function_coverage=1 00:41:53.676 --rc genhtml_legend=1 00:41:53.676 --rc geninfo_all_blocks=1 00:41:53.676 --rc geninfo_unexecuted_blocks=1 00:41:53.676 00:41:53.676 ' 00:41:53.676 09:53:00 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:53.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:53.676 --rc genhtml_branch_coverage=1 00:41:53.676 --rc genhtml_function_coverage=1 00:41:53.676 --rc genhtml_legend=1 00:41:53.676 --rc geninfo_all_blocks=1 00:41:53.676 --rc geninfo_unexecuted_blocks=1 00:41:53.676 00:41:53.676 ' 00:41:53.676 09:53:00 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:53.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:53.676 --rc genhtml_branch_coverage=1 00:41:53.676 --rc genhtml_function_coverage=1 00:41:53.676 --rc genhtml_legend=1 00:41:53.676 --rc geninfo_all_blocks=1 00:41:53.676 --rc geninfo_unexecuted_blocks=1 00:41:53.676 00:41:53.676 ' 00:41:53.676 09:53:00 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:41:53.676 09:53:00 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:41:53.676 09:53:00 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:53.676 09:53:00 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:53.676 09:53:00 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:53.676 09:53:00 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:53.676 09:53:00 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:53.676 09:53:00 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:53.676 09:53:00 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:53.676 09:53:00 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:53.676 09:53:00 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:53.676 09:53:00 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:53.676 09:53:00 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:41:53.676 09:53:00 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:41:53.676 09:53:00 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:53.676 09:53:00 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:53.676 09:53:00 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:41:53.677 09:53:00 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:53.677 09:53:00 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:53.677 09:53:00 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:41:53.677 09:53:00 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:53.677 09:53:00 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:53.677 09:53:00 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:53.677 09:53:00 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:53.677 09:53:00 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:53.677 09:53:00 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:53.677 09:53:00 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:41:53.677 09:53:00 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:53.677 09:53:00 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:41:53.677 09:53:00 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:53.677 09:53:00 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:53.677 09:53:00 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:53.677 09:53:00 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:53.677 09:53:00 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:53.677 09:53:00 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:53.677 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:53.677 09:53:00 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:53.677 09:53:00 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:53.677 09:53:00 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:53.677 09:53:00 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:41:53.677 09:53:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:41:53.677 09:53:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:41:53.677 09:53:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:41:53.677 09:53:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:41:53.677 09:53:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:41:53.677 09:53:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:41:53.677 09:53:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:41:53.677 09:53:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:41:53.677 09:53:00 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:41:53.677 INFO: launching applications... 00:41:53.677 09:53:00 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
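The common.sh harness sourced above keys everything about a test app on its name via bash associative arrays, which is what lets the same start/shutdown helpers drive the 'target' app here. The declarations from the trace boil down to:

  declare -A app_pid app_socket app_params configs_path
  app_socket[target]=/var/tmp/spdk_tgt.sock
  app_params[target]='-m 0x1 -s 1024'
  configs_path[target]=/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
  # app_pid[target] is filled in once spdk_tgt launches and waitforlisten returns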
00:41:53.677 09:53:00 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:41:53.677 09:53:00 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:41:53.677 09:53:00 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:41:53.677 09:53:00 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:41:53.677 09:53:00 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:41:53.677 09:53:00 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:41:53.677 09:53:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:41:53.677 09:53:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:41:53.677 09:53:00 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=60081 00:41:53.677 Waiting for target to run... 00:41:53.677 09:53:00 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:41:53.677 09:53:00 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 60081 /var/tmp/spdk_tgt.sock 00:41:53.677 09:53:00 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 60081 ']' 00:41:53.677 09:53:00 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:41:53.677 09:53:00 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:41:53.677 09:53:00 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:53.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:41:53.677 09:53:00 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:41:53.677 09:53:00 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:53.677 09:53:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:41:53.677 [2024-12-09 09:53:00.610017] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:41:53.677 [2024-12-09 09:53:00.610131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60081 ] 00:41:53.937 [2024-12-09 09:53:00.974557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:54.196 [2024-12-09 09:53:01.034159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:54.765 09:53:01 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:54.765 09:53:01 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:41:54.765 00:41:54.765 09:53:01 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:41:54.765 INFO: shutting down applications... 00:41:54.765 09:53:01 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:41:54.765 09:53:01 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:41:54.765 09:53:01 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:41:54.765 09:53:01 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:41:54.765 09:53:01 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 60081 ]] 00:41:54.765 09:53:01 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 60081 00:41:54.765 09:53:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:41:54.765 09:53:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:41:54.765 09:53:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60081 00:41:54.765 09:53:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:41:55.025 09:53:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:41:55.025 09:53:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:41:55.025 09:53:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60081 00:41:55.025 09:53:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:41:55.594 09:53:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:41:55.594 09:53:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:41:55.594 09:53:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60081 00:41:55.594 09:53:02 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:41:55.594 09:53:02 json_config_extra_key -- json_config/common.sh@43 -- # break 00:41:55.594 09:53:02 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:41:55.594 SPDK target shutdown done 00:41:55.594 09:53:02 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:41:55.594 Success 00:41:55.594 09:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:41:55.594 00:41:55.594 real 0m2.243s 00:41:55.594 user 0m1.802s 00:41:55.594 sys 0m0.444s 00:41:55.594 09:53:02 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:55.594 09:53:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:41:55.594 ************************************ 00:41:55.594 END TEST json_config_extra_key 00:41:55.594 ************************************ 00:41:55.594 09:53:02 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:41:55.594 09:53:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:55.594 09:53:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:55.594 09:53:02 -- common/autotest_common.sh@10 -- # set +x 00:41:55.594 ************************************ 00:41:55.594 START TEST alias_rpc 00:41:55.594 ************************************ 00:41:55.594 09:53:02 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:41:55.861 * Looking for test storage... 
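The shutdown just traced is SIGINT-then-poll: one kill -SIGINT, then up to thirty half-second kill -0 probes before the harness would give up. In outline (the stderr redirect is added here only to keep the sketch quiet):

  kill -SIGINT "$pid"
  for ((i = 0; i < 30; i++)); do
      kill -0 "$pid" 2> /dev/null || break   # process gone -> shutdown done
      sleep 0.5
  done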
00:41:55.861 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:41:55.861 09:53:02 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:55.861 09:53:02 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:41:55.861 09:53:02 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:55.861 09:53:02 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:55.861 09:53:02 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:55.861 09:53:02 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:55.861 09:53:02 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:55.861 09:53:02 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:41:55.861 09:53:02 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:41:55.861 09:53:02 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:41:55.861 09:53:02 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:41:55.861 09:53:02 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:41:55.861 09:53:02 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:41:55.861 09:53:02 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:41:55.861 09:53:02 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:55.861 09:53:02 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:41:55.861 09:53:02 alias_rpc -- scripts/common.sh@345 -- # : 1 00:41:55.861 09:53:02 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:55.861 09:53:02 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:55.861 09:53:02 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:41:55.861 09:53:02 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:41:55.861 09:53:02 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:55.861 09:53:02 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:41:55.861 09:53:02 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:41:55.861 09:53:02 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:41:55.861 09:53:02 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:41:55.861 09:53:02 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:55.861 09:53:02 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:41:55.861 09:53:02 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:41:55.861 09:53:02 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:55.861 09:53:02 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:55.861 09:53:02 alias_rpc -- scripts/common.sh@368 -- # return 0 00:41:55.861 09:53:02 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:55.861 09:53:02 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:55.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:55.861 --rc genhtml_branch_coverage=1 00:41:55.861 --rc genhtml_function_coverage=1 00:41:55.861 --rc genhtml_legend=1 00:41:55.861 --rc geninfo_all_blocks=1 00:41:55.861 --rc geninfo_unexecuted_blocks=1 00:41:55.861 00:41:55.861 ' 00:41:55.861 09:53:02 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:55.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:55.861 --rc genhtml_branch_coverage=1 00:41:55.861 --rc genhtml_function_coverage=1 00:41:55.861 --rc genhtml_legend=1 00:41:55.861 --rc geninfo_all_blocks=1 00:41:55.861 --rc geninfo_unexecuted_blocks=1 00:41:55.861 00:41:55.861 ' 00:41:55.861 09:53:02 alias_rpc -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:55.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:55.861 --rc genhtml_branch_coverage=1 00:41:55.861 --rc genhtml_function_coverage=1 00:41:55.861 --rc genhtml_legend=1 00:41:55.861 --rc geninfo_all_blocks=1 00:41:55.861 --rc geninfo_unexecuted_blocks=1 00:41:55.861 00:41:55.861 ' 00:41:55.861 09:53:02 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:55.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:55.861 --rc genhtml_branch_coverage=1 00:41:55.861 --rc genhtml_function_coverage=1 00:41:55.861 --rc genhtml_legend=1 00:41:55.861 --rc geninfo_all_blocks=1 00:41:55.861 --rc geninfo_unexecuted_blocks=1 00:41:55.862 00:41:55.862 ' 00:41:55.862 09:53:02 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:41:55.862 09:53:02 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=60172 00:41:55.862 09:53:02 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:55.862 09:53:02 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 60172 00:41:55.862 09:53:02 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 60172 ']' 00:41:55.862 09:53:02 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:55.862 09:53:02 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:55.862 09:53:02 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:55.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:55.862 09:53:02 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:55.862 09:53:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:41:56.121 [2024-12-09 09:53:02.914278] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
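The lt 1.15 2 check traced above expands to cmp_versions 1.15 '<' 2, which splits both version strings on '.', '-' and ':' and compares them component-wise. A condensed re-implementation of that logic (the helper name here is ours, the in-tree one is lt; the real scripts/common.sh also normalizes non-numeric components, omitted in this sketch):

  version_lt() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for ((v = 0; v < max; v++)); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1   # equal -> not less-than
  }
  version_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'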
00:41:56.121 [2024-12-09 09:53:02.914367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60172 ] 00:41:56.121 [2024-12-09 09:53:03.042534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:56.121 [2024-12-09 09:53:03.115384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:57.062 09:53:03 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:57.062 09:53:03 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:41:57.062 09:53:03 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:41:57.062 09:53:04 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 60172 00:41:57.062 09:53:04 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 60172 ']' 00:41:57.062 09:53:04 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 60172 00:41:57.062 09:53:04 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:41:57.062 09:53:04 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:57.062 09:53:04 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60172 00:41:57.321 09:53:04 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:57.321 09:53:04 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:57.321 killing process with pid 60172 00:41:57.321 09:53:04 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60172' 00:41:57.321 09:53:04 alias_rpc -- common/autotest_common.sh@973 -- # kill 60172 00:41:57.321 09:53:04 alias_rpc -- common/autotest_common.sh@978 -- # wait 60172 00:41:57.891 00:41:57.891 real 0m2.060s 00:41:57.891 user 0m2.072s 00:41:57.891 sys 0m0.622s 00:41:57.891 09:53:04 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:57.891 09:53:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:41:57.891 ************************************ 00:41:57.891 END TEST alias_rpc 00:41:57.891 ************************************ 00:41:57.891 09:53:04 -- spdk/autotest.sh@163 -- # [[ 1 -eq 0 ]] 00:41:57.891 09:53:04 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:41:57.891 09:53:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:57.891 09:53:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:57.891 09:53:04 -- common/autotest_common.sh@10 -- # set +x 00:41:57.891 ************************************ 00:41:57.891 START TEST dpdk_mem_utility 00:41:57.891 ************************************ 00:41:57.891 09:53:04 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:41:57.891 * Looking for test storage... 
00:41:57.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:41:57.891 09:53:04 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:57.891 09:53:04 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:57.891 09:53:04 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:41:58.149 09:53:04 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:58.149 09:53:04 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:58.149 09:53:04 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:58.149 09:53:04 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:58.149 09:53:04 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:41:58.149 09:53:04 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:41:58.149 09:53:04 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:41:58.149 09:53:04 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:41:58.149 09:53:04 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:41:58.149 09:53:04 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:41:58.149 09:53:04 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:41:58.149 09:53:04 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:58.149 09:53:04 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:41:58.149 09:53:04 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:41:58.149 09:53:04 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:58.149 09:53:04 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:58.149 09:53:04 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:41:58.149 09:53:04 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:41:58.149 09:53:04 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:58.149 09:53:04 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:41:58.149 09:53:04 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:41:58.149 09:53:04 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:41:58.149 09:53:04 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:41:58.149 09:53:04 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:58.149 09:53:04 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:41:58.149 09:53:04 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:41:58.149 09:53:04 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:58.149 09:53:04 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:58.149 09:53:04 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:41:58.149 09:53:04 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:58.149 09:53:04 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:58.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:58.149 --rc genhtml_branch_coverage=1 00:41:58.149 --rc genhtml_function_coverage=1 00:41:58.149 --rc genhtml_legend=1 00:41:58.149 --rc geninfo_all_blocks=1 00:41:58.149 --rc geninfo_unexecuted_blocks=1 00:41:58.149 00:41:58.149 ' 00:41:58.149 09:53:04 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:58.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:58.149 --rc 
genhtml_branch_coverage=1 00:41:58.149 --rc genhtml_function_coverage=1 00:41:58.149 --rc genhtml_legend=1 00:41:58.149 --rc geninfo_all_blocks=1 00:41:58.149 --rc geninfo_unexecuted_blocks=1 00:41:58.149 00:41:58.149 ' 00:41:58.149 09:53:04 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:58.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:58.149 --rc genhtml_branch_coverage=1 00:41:58.149 --rc genhtml_function_coverage=1 00:41:58.149 --rc genhtml_legend=1 00:41:58.149 --rc geninfo_all_blocks=1 00:41:58.149 --rc geninfo_unexecuted_blocks=1 00:41:58.149 00:41:58.149 ' 00:41:58.149 09:53:04 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:58.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:58.149 --rc genhtml_branch_coverage=1 00:41:58.149 --rc genhtml_function_coverage=1 00:41:58.149 --rc genhtml_legend=1 00:41:58.149 --rc geninfo_all_blocks=1 00:41:58.149 --rc geninfo_unexecuted_blocks=1 00:41:58.149 00:41:58.149 ' 00:41:58.149 09:53:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:41:58.149 09:53:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=60272 00:41:58.149 09:53:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:58.149 09:53:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 60272 00:41:58.150 09:53:04 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 60272 ']' 00:41:58.150 09:53:04 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:58.150 09:53:04 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:58.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:58.150 09:53:04 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:58.150 09:53:04 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:58.150 09:53:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:41:58.150 [2024-12-09 09:53:05.026752] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
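The memory accounting that follows comes in two steps, both visible in the trace below: env_dpdk_get_mem_stats asks the target to write its DPDK stats to /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py renders them (plain for the heap/mempool/memzone totals, -m 0 for the per-element listing of heap 0):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
  # -> {"filename": "/tmp/spdk_mem_dump.txt"}
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py       # summary totals
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0  # free/busy elements of heap 0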
00:41:58.150 [2024-12-09 09:53:05.026835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60272 ] 00:41:58.150 [2024-12-09 09:53:05.173724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:58.407 [2024-12-09 09:53:05.251909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:58.973 09:53:05 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:58.973 09:53:05 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:41:58.973 09:53:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:41:58.973 09:53:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:41:58.973 09:53:05 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:58.973 09:53:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:41:58.973 { 00:41:58.973 "filename": "/tmp/spdk_mem_dump.txt" 00:41:58.973 } 00:41:58.973 09:53:05 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:58.973 09:53:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:41:59.234 DPDK memory size 818.000000 MiB in 1 heap(s) 00:41:59.234 1 heaps totaling size 818.000000 MiB 00:41:59.234 size: 818.000000 MiB heap id: 0 00:41:59.234 end heaps---------- 00:41:59.234 9 mempools totaling size 603.782043 MiB 00:41:59.234 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:41:59.234 size: 158.602051 MiB name: PDU_data_out_Pool 00:41:59.234 size: 100.555481 MiB name: bdev_io_60272 00:41:59.234 size: 50.003479 MiB name: msgpool_60272 00:41:59.234 size: 36.509338 MiB name: fsdev_io_60272 00:41:59.234 size: 21.763794 MiB name: PDU_Pool 00:41:59.234 size: 19.513306 MiB name: SCSI_TASK_Pool 00:41:59.234 size: 4.133484 MiB name: evtpool_60272 00:41:59.234 size: 0.026123 MiB name: Session_Pool 00:41:59.234 end mempools------- 00:41:59.234 6 memzones totaling size 4.142822 MiB 00:41:59.234 size: 1.000366 MiB name: RG_ring_0_60272 00:41:59.234 size: 1.000366 MiB name: RG_ring_1_60272 00:41:59.234 size: 1.000366 MiB name: RG_ring_4_60272 00:41:59.234 size: 1.000366 MiB name: RG_ring_5_60272 00:41:59.234 size: 0.125366 MiB name: RG_ring_2_60272 00:41:59.234 size: 0.015991 MiB name: RG_ring_3_60272 00:41:59.234 end memzones------- 00:41:59.234 09:53:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:41:59.234 heap id: 0 total size: 818.000000 MiB number of busy elements: 227 number of free elements: 15 00:41:59.234 list of free elements. 
size: 10.818970 MiB 00:41:59.234 element at address: 0x200019200000 with size: 0.999878 MiB 00:41:59.234 element at address: 0x200019400000 with size: 0.999878 MiB 00:41:59.234 element at address: 0x200000400000 with size: 0.996155 MiB 00:41:59.234 element at address: 0x200032000000 with size: 0.994446 MiB 00:41:59.234 element at address: 0x200006400000 with size: 0.959839 MiB 00:41:59.234 element at address: 0x200012c00000 with size: 0.944275 MiB 00:41:59.234 element at address: 0x200019600000 with size: 0.936584 MiB 00:41:59.234 element at address: 0x200000200000 with size: 0.717346 MiB 00:41:59.234 element at address: 0x20001ae00000 with size: 0.572632 MiB 00:41:59.234 element at address: 0x200000c00000 with size: 0.490845 MiB 00:41:59.234 element at address: 0x20000a600000 with size: 0.489807 MiB 00:41:59.234 element at address: 0x200019800000 with size: 0.485657 MiB 00:41:59.234 element at address: 0x200003e00000 with size: 0.481201 MiB 00:41:59.234 element at address: 0x200028200000 with size: 0.397034 MiB 00:41:59.234 element at address: 0x200000800000 with size: 0.353394 MiB 00:41:59.234 list of standard malloc elements. size: 199.252136 MiB 00:41:59.234 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:41:59.234 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:41:59.234 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:41:59.234 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:41:59.234 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:41:59.234 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:41:59.234 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:41:59.234 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:41:59.234 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:41:59.234 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:41:59.234 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:41:59.234 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:41:59.234 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:41:59.234 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:41:59.234 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:41:59.234 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:41:59.234 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:41:59.234 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:41:59.234 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:41:59.234 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:41:59.234 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:41:59.234 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:41:59.234 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:41:59.234 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:41:59.234 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:41:59.234 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:41:59.234 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:41:59.234 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:41:59.234 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:41:59.234 element at address: 0x20000085a780 with size: 0.000183 MiB 00:41:59.234 element at address: 0x20000085a980 with size: 0.000183 MiB 00:41:59.234 element at address: 0x20000085ec40 with size: 0.000183 MiB 00:41:59.234 element at address: 0x20000087ef00 with size: 0.000183 MiB 
00:41:59.234 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:41:59.234 element at address: 0x20000087f080 with size: 0.000183 MiB 00:41:59.234 element at address: 0x20000087f140 with size: 0.000183 MiB 00:41:59.234 element at address: 0x20000087f200 with size: 0.000183 MiB 00:41:59.234 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:41:59.234 element at address: 0x20000087f380 with size: 0.000183 MiB 00:41:59.234 element at address: 0x20000087f440 with size: 0.000183 MiB 00:41:59.234 element at address: 0x20000087f500 with size: 0.000183 MiB 00:41:59.234 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:41:59.234 element at address: 0x20000087f680 with size: 0.000183 MiB 00:41:59.234 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:41:59.234 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:41:59.234 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:41:59.234 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:41:59.234 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:41:59.234 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:41:59.234 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:41:59.234 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:41:59.234 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:41:59.234 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:41:59.234 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:41:59.234 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:41:59.234 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:41:59.234 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:41:59.234 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:41:59.234 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:41:59.234 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:41:59.234 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:41:59.234 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:41:59.234 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:41:59.234 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:41:59.234 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:41:59.234 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:41:59.234 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:41:59.234 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:41:59.234 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:41:59.234 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:41:59.234 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:41:59.234 element at address: 0x200000cff000 with size: 0.000183 MiB 00:41:59.234 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:41:59.234 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:41:59.234 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:41:59.234 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:41:59.234 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:41:59.234 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:41:59.234 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x200003efb980 with size: 0.000183 MiB 00:41:59.235 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:41:59.235 element at 
address: 0x20000a67d700 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:41:59.235 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:41:59.235 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:41:59.235 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae94600 
with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:41:59.235 element at address: 0x200028265a40 with size: 0.000183 MiB 00:41:59.235 element at address: 0x200028265b00 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826c700 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826c900 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826d080 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826d140 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826d200 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826d380 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826d440 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826d500 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826d680 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826d740 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826d800 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826d980 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826da40 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826db00 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826dc80 with size: 0.000183 MiB 
00:41:59.235 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826de00 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826df80 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826e040 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826e100 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826e280 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826e340 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826e400 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826e580 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826e640 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826e700 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826e880 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826e940 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826f000 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826f180 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826f240 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826f300 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826f480 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826f540 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826f600 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826f780 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826f840 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826f900 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:41:59.235 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:41:59.235 list of memzone associated elements. 
size: 607.928894 MiB 00:41:59.235 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:41:59.235 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:41:59.235 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:41:59.236 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:41:59.236 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:41:59.236 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_60272_0 00:41:59.236 element at address: 0x200000dff380 with size: 48.003052 MiB 00:41:59.236 associated memzone info: size: 48.002930 MiB name: MP_msgpool_60272_0 00:41:59.236 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:41:59.236 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_60272_0 00:41:59.236 element at address: 0x2000199be940 with size: 20.255554 MiB 00:41:59.236 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:41:59.236 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:41:59.236 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:41:59.236 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:41:59.236 associated memzone info: size: 3.000122 MiB name: MP_evtpool_60272_0 00:41:59.236 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:41:59.236 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_60272 00:41:59.236 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:41:59.236 associated memzone info: size: 1.007996 MiB name: MP_evtpool_60272 00:41:59.236 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:41:59.236 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:41:59.236 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:41:59.236 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:41:59.236 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:41:59.236 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:41:59.236 element at address: 0x200003efba40 with size: 1.008118 MiB 00:41:59.236 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:41:59.236 element at address: 0x200000cff180 with size: 1.000488 MiB 00:41:59.236 associated memzone info: size: 1.000366 MiB name: RG_ring_0_60272 00:41:59.236 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:41:59.236 associated memzone info: size: 1.000366 MiB name: RG_ring_1_60272 00:41:59.236 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:41:59.236 associated memzone info: size: 1.000366 MiB name: RG_ring_4_60272 00:41:59.236 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:41:59.236 associated memzone info: size: 1.000366 MiB name: RG_ring_5_60272 00:41:59.236 element at address: 0x20000087f740 with size: 0.500488 MiB 00:41:59.236 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_60272 00:41:59.236 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:41:59.236 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_60272 00:41:59.236 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:41:59.236 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:41:59.236 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:41:59.236 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:41:59.236 element at address: 0x20001987c540 with size: 0.250488 MiB 00:41:59.236 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:41:59.236 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:41:59.236 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_60272 00:41:59.236 element at address: 0x20000085ed00 with size: 0.125488 MiB 00:41:59.236 associated memzone info: size: 0.125366 MiB name: RG_ring_2_60272 00:41:59.236 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:41:59.236 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:41:59.236 element at address: 0x200028265bc0 with size: 0.023743 MiB 00:41:59.236 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:41:59.236 element at address: 0x20000085aa40 with size: 0.016113 MiB 00:41:59.236 associated memzone info: size: 0.015991 MiB name: RG_ring_3_60272 00:41:59.236 element at address: 0x20002826bd00 with size: 0.002441 MiB 00:41:59.236 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:41:59.236 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:41:59.236 associated memzone info: size: 0.000183 MiB name: MP_msgpool_60272 00:41:59.236 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:41:59.236 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_60272 00:41:59.236 element at address: 0x20000085a840 with size: 0.000305 MiB 00:41:59.236 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_60272 00:41:59.236 element at address: 0x20002826c7c0 with size: 0.000305 MiB 00:41:59.236 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:41:59.236 09:53:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:41:59.236 09:53:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 60272 00:41:59.236 09:53:06 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 60272 ']' 00:41:59.236 09:53:06 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 60272 00:41:59.236 09:53:06 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:41:59.236 09:53:06 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:59.236 09:53:06 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60272 00:41:59.236 09:53:06 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:59.236 09:53:06 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:59.236 09:53:06 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60272' 00:41:59.236 killing process with pid 60272 00:41:59.236 09:53:06 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 60272 00:41:59.236 09:53:06 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 60272 00:41:59.802 00:41:59.802 real 0m1.919s 00:41:59.802 user 0m1.848s 00:41:59.802 sys 0m0.578s 00:41:59.802 09:53:06 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:59.802 09:53:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:41:59.802 ************************************ 00:41:59.802 END TEST dpdk_mem_utility 00:41:59.802 ************************************ 00:41:59.802 09:53:06 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:41:59.802 09:53:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:59.802 09:53:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:59.802 09:53:06 -- common/autotest_common.sh@10 -- # set +x 
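The dpdk_mem_utility test that just finished drives SPDK's memory introspection end to end: it traps cleanup on exit, asks the running target to dump its DPDK allocation state, then post-processes the dump twice, once as a summary and once element by element. A minimal sketch of that flow, assuming a target is already running, the repo root is the working directory, and the default dump path /tmp/spdk_mem_dump.txt:

    # Ask the running target to write its DPDK memory state to /tmp/spdk_mem_dump.txt
    ./scripts/rpc.py env_dpdk_get_mem_stats
    # Summarize heaps, mempools and memzones from the dump
    ./scripts/dpdk_mem_info.py
    # Expand heap 0 element by element (-m selects the heap id)
    ./scripts/dpdk_mem_info.py -m 0
    # Tear the target down, mirroring the 'killprocess $spdkpid' trap above
    kill "$spdkpid"    # $spdkpid is assumed to hold the spdk_tgt pid
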
00:41:59.802 ************************************ 00:41:59.802 START TEST event 00:41:59.802 ************************************ 00:41:59.802 09:53:06 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:42:00.060 * Looking for test storage... 00:42:00.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:42:00.060 09:53:06 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:00.060 09:53:06 event -- common/autotest_common.sh@1711 -- # lcov --version 00:42:00.060 09:53:06 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:00.060 09:53:06 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:00.060 09:53:06 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:00.060 09:53:06 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:00.060 09:53:06 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:00.060 09:53:06 event -- scripts/common.sh@336 -- # IFS=.-: 00:42:00.061 09:53:06 event -- scripts/common.sh@336 -- # read -ra ver1 00:42:00.061 09:53:06 event -- scripts/common.sh@337 -- # IFS=.-: 00:42:00.061 09:53:06 event -- scripts/common.sh@337 -- # read -ra ver2 00:42:00.061 09:53:06 event -- scripts/common.sh@338 -- # local 'op=<' 00:42:00.061 09:53:06 event -- scripts/common.sh@340 -- # ver1_l=2 00:42:00.061 09:53:06 event -- scripts/common.sh@341 -- # ver2_l=1 00:42:00.061 09:53:06 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:00.061 09:53:06 event -- scripts/common.sh@344 -- # case "$op" in 00:42:00.061 09:53:06 event -- scripts/common.sh@345 -- # : 1 00:42:00.061 09:53:06 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:00.061 09:53:06 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:00.061 09:53:06 event -- scripts/common.sh@365 -- # decimal 1 00:42:00.061 09:53:06 event -- scripts/common.sh@353 -- # local d=1 00:42:00.061 09:53:06 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:00.061 09:53:06 event -- scripts/common.sh@355 -- # echo 1 00:42:00.061 09:53:06 event -- scripts/common.sh@365 -- # ver1[v]=1 00:42:00.061 09:53:06 event -- scripts/common.sh@366 -- # decimal 2 00:42:00.061 09:53:06 event -- scripts/common.sh@353 -- # local d=2 00:42:00.061 09:53:06 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:00.061 09:53:06 event -- scripts/common.sh@355 -- # echo 2 00:42:00.061 09:53:06 event -- scripts/common.sh@366 -- # ver2[v]=2 00:42:00.061 09:53:06 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:00.061 09:53:06 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:00.061 09:53:06 event -- scripts/common.sh@368 -- # return 0 00:42:00.061 09:53:06 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:00.061 09:53:06 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:00.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:00.061 --rc genhtml_branch_coverage=1 00:42:00.061 --rc genhtml_function_coverage=1 00:42:00.061 --rc genhtml_legend=1 00:42:00.061 --rc geninfo_all_blocks=1 00:42:00.061 --rc geninfo_unexecuted_blocks=1 00:42:00.061 00:42:00.061 ' 00:42:00.061 09:53:06 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:00.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:00.061 --rc genhtml_branch_coverage=1 00:42:00.061 --rc genhtml_function_coverage=1 00:42:00.061 --rc genhtml_legend=1 00:42:00.061 --rc 
geninfo_all_blocks=1 00:42:00.061 --rc geninfo_unexecuted_blocks=1 00:42:00.061 00:42:00.061 ' 00:42:00.061 09:53:06 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:00.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:00.061 --rc genhtml_branch_coverage=1 00:42:00.061 --rc genhtml_function_coverage=1 00:42:00.061 --rc genhtml_legend=1 00:42:00.061 --rc geninfo_all_blocks=1 00:42:00.061 --rc geninfo_unexecuted_blocks=1 00:42:00.061 00:42:00.061 ' 00:42:00.061 09:53:06 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:00.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:00.061 --rc genhtml_branch_coverage=1 00:42:00.061 --rc genhtml_function_coverage=1 00:42:00.061 --rc genhtml_legend=1 00:42:00.061 --rc geninfo_all_blocks=1 00:42:00.061 --rc geninfo_unexecuted_blocks=1 00:42:00.061 00:42:00.061 ' 00:42:00.061 09:53:06 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:42:00.061 09:53:06 event -- bdev/nbd_common.sh@6 -- # set -e 00:42:00.061 09:53:06 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:42:00.061 09:53:06 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:42:00.061 09:53:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:00.061 09:53:06 event -- common/autotest_common.sh@10 -- # set +x 00:42:00.061 ************************************ 00:42:00.061 START TEST event_perf 00:42:00.061 ************************************ 00:42:00.061 09:53:06 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:42:00.061 Running I/O for 1 seconds...[2024-12-09 09:53:07.001093] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:42:00.061 [2024-12-09 09:53:07.001188] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60375 ] 00:42:00.320 [2024-12-09 09:53:07.154651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:00.320 [2024-12-09 09:53:07.231105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:00.320 [2024-12-09 09:53:07.231265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:00.320 [2024-12-09 09:53:07.231450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:00.320 [2024-12-09 09:53:07.231453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:01.256 Running I/O for 1 seconds... 00:42:01.256 lcore 0: 84819 00:42:01.256 lcore 1: 84822 00:42:01.256 lcore 2: 84825 00:42:01.256 lcore 3: 84822 00:42:01.516 done. 
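event_perf starts one SPDK reactor per core in the supplied mask and counts how many events each lcore processes inside the run window; the per-lcore counts above (roughly 84.8k each) show the load spread evenly across the four reactors. A sketch of re-running it by hand with the same arguments the harness traced:

    # 0xF pins reactors to lcores 0-3; -t 1 runs the measurement for one second
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
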
00:42:01.516 00:42:01.516 real 0m1.329s 00:42:01.516 user 0m4.132s 00:42:01.516 sys 0m0.068s 00:42:01.516 ************************************ 00:42:01.516 END TEST event_perf 00:42:01.516 ************************************ 00:42:01.516 09:53:08 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:01.516 09:53:08 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:42:01.516 09:53:08 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:42:01.516 09:53:08 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:01.516 09:53:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:01.516 09:53:08 event -- common/autotest_common.sh@10 -- # set +x 00:42:01.516 ************************************ 00:42:01.516 START TEST event_reactor 00:42:01.516 ************************************ 00:42:01.516 09:53:08 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:42:01.516 [2024-12-09 09:53:08.406692] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:42:01.516 [2024-12-09 09:53:08.406808] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60414 ] 00:42:01.516 [2024-12-09 09:53:08.556834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:01.776 [2024-12-09 09:53:08.636471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:02.714 test_start 00:42:02.714 oneshot 00:42:02.714 tick 100 00:42:02.714 tick 100 00:42:02.714 tick 250 00:42:02.714 tick 100 00:42:02.714 tick 100 00:42:02.714 tick 100 00:42:02.714 tick 250 00:42:02.714 tick 500 00:42:02.714 tick 100 00:42:02.714 tick 100 00:42:02.714 tick 250 00:42:02.714 tick 100 00:42:02.714 tick 100 00:42:02.714 test_end 00:42:02.714 00:42:02.714 real 0m1.324s 00:42:02.714 user 0m1.158s 00:42:02.714 sys 0m0.059s 00:42:02.714 09:53:09 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:02.714 ************************************ 00:42:02.714 END TEST event_reactor 00:42:02.714 ************************************ 00:42:02.714 09:53:09 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:42:02.973 09:53:09 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:42:02.973 09:53:09 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:02.973 09:53:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:02.973 09:53:09 event -- common/autotest_common.sh@10 -- # set +x 00:42:02.973 ************************************ 00:42:02.973 START TEST event_reactor_perf 00:42:02.973 ************************************ 00:42:02.973 09:53:09 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:42:02.973 [2024-12-09 09:53:09.804221] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:42:02.974 [2024-12-09 09:53:09.804331] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60449 ] 00:42:02.974 [2024-12-09 09:53:09.957401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:03.232 [2024-12-09 09:53:10.041206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:04.168 test_start 00:42:04.169 test_end 00:42:04.169 Performance: 431262 events per second 00:42:04.169 00:42:04.169 real 0m1.337s 00:42:04.169 user 0m1.171s 00:42:04.169 sys 0m0.059s 00:42:04.169 ************************************ 00:42:04.169 END TEST event_reactor_perf 00:42:04.169 ************************************ 00:42:04.169 09:53:11 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:04.169 09:53:11 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:42:04.169 09:53:11 event -- event/event.sh@49 -- # uname -s 00:42:04.169 09:53:11 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:42:04.169 09:53:11 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:42:04.169 09:53:11 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:04.169 09:53:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:04.169 09:53:11 event -- common/autotest_common.sh@10 -- # set +x 00:42:04.169 ************************************ 00:42:04.169 START TEST event_scheduler 00:42:04.169 ************************************ 00:42:04.169 09:53:11 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:42:04.428 * Looking for test storage... 
00:42:04.428 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:42:04.428 09:53:11 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:04.428 09:53:11 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:42:04.428 09:53:11 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:04.428 09:53:11 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:04.428 09:53:11 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:04.428 09:53:11 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:04.428 09:53:11 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:04.428 09:53:11 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:42:04.428 09:53:11 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:42:04.428 09:53:11 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:42:04.428 09:53:11 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:42:04.428 09:53:11 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:42:04.428 09:53:11 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:42:04.428 09:53:11 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:42:04.428 09:53:11 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:04.428 09:53:11 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:42:04.428 09:53:11 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:42:04.428 09:53:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:04.428 09:53:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:04.428 09:53:11 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:42:04.428 09:53:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:42:04.428 09:53:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:04.428 09:53:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:42:04.428 09:53:11 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:42:04.428 09:53:11 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:42:04.428 09:53:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:42:04.428 09:53:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:04.428 09:53:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:42:04.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
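The "Waiting for process to start up..." message interleaved above comes from waitforlisten in common/autotest_common.sh, which retries (max_retries=100) until the freshly launched app answers on its RPC socket. A hedged equivalent of that polling loop:

    # Poll the UNIX socket until the app accepts an RPC (-t 1 caps each attempt at one second)
    while ! ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
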
00:42:04.428 09:53:11 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:42:04.428 09:53:11 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:04.428 09:53:11 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:04.428 09:53:11 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:42:04.428 09:53:11 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:04.428 09:53:11 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:04.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:04.428 --rc genhtml_branch_coverage=1 00:42:04.428 --rc genhtml_function_coverage=1 00:42:04.428 --rc genhtml_legend=1 00:42:04.428 --rc geninfo_all_blocks=1 00:42:04.428 --rc geninfo_unexecuted_blocks=1 00:42:04.428 00:42:04.428 ' 00:42:04.428 09:53:11 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:04.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:04.428 --rc genhtml_branch_coverage=1 00:42:04.428 --rc genhtml_function_coverage=1 00:42:04.428 --rc genhtml_legend=1 00:42:04.428 --rc geninfo_all_blocks=1 00:42:04.428 --rc geninfo_unexecuted_blocks=1 00:42:04.428 00:42:04.428 ' 00:42:04.428 09:53:11 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:04.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:04.428 --rc genhtml_branch_coverage=1 00:42:04.428 --rc genhtml_function_coverage=1 00:42:04.428 --rc genhtml_legend=1 00:42:04.428 --rc geninfo_all_blocks=1 00:42:04.428 --rc geninfo_unexecuted_blocks=1 00:42:04.428 00:42:04.428 ' 00:42:04.428 09:53:11 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:04.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:04.428 --rc genhtml_branch_coverage=1 00:42:04.428 --rc genhtml_function_coverage=1 00:42:04.428 --rc genhtml_legend=1 00:42:04.428 --rc geninfo_all_blocks=1 00:42:04.428 --rc geninfo_unexecuted_blocks=1 00:42:04.428 00:42:04.428 ' 00:42:04.428 09:53:11 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:42:04.428 09:53:11 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60519 00:42:04.428 09:53:11 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:42:04.428 09:53:11 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60519 00:42:04.428 09:53:11 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 60519 ']' 00:42:04.428 09:53:11 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:04.428 09:53:11 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:04.428 09:53:11 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:04.428 09:53:11 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:04.428 09:53:11 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:42:04.428 09:53:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:42:04.687 [2024-12-09 09:53:11.479883] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:42:04.687 [2024-12-09 09:53:11.480375] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60519 ]
00:42:04.687 [2024-12-09 09:53:11.613457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:42:04.687 [2024-12-09 09:53:11.680707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:42:04.687 [2024-12-09 09:53:11.680851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:42:04.687 [2024-12-09 09:53:11.681124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:42:04.687 [2024-12-09 09:53:11.681130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:42:05.690 09:53:12 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:42:05.690 09:53:12 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:42:05.690 09:53:12 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:42:05.690 09:53:12 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:05.690 09:53:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:42:05.690 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:42:05.690 POWER: Cannot set governor of lcore 0 to userspace
00:42:05.690 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:42:05.690 POWER: Cannot set governor of lcore 0 to performance
00:42:05.690 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:42:05.690 POWER: Cannot set governor of lcore 0 to userspace
00:42:05.690 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:42:05.690 POWER: Cannot set governor of lcore 0 to userspace
00:42:05.690 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0
00:42:05.690 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:42:05.690 POWER: Unable to set Power Management Environment for lcore 0
00:42:05.690 [2024-12-09 09:53:12.385826] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0
00:42:05.690 [2024-12-09 09:53:12.385863] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0
00:42:05.690 [2024-12-09 09:53:12.385890] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:42:05.690 [2024-12-09 09:53:12.385931] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:42:05.690 [2024-12-09 09:53:12.385962] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:42:05.690 [2024-12-09 09:53:12.385986] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:42:05.691 09:53:12 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:05.691 09:53:12 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:42:05.691 09:53:12 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:05.691 09:53:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:42:05.691 [2024-12-09 09:53:12.467823] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
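The POWER and GUEST_CHANNEL errors above are expected inside a VM: there are no cpufreq scaling_governor sysfs nodes to open and no virtio power-agent channel, so the DPDK governor cannot initialize and the dynamic scheduler falls back to its defaults (load limit 20, core limit 80, core busy 95) while the test continues. The RPC sequence itself, sketched against the default socket (framework_get_scheduler is an assumption here, used only to confirm the result):

    ./scripts/rpc.py framework_set_scheduler dynamic   # must be selected before init completes
    ./scripts/rpc.py framework_start_init              # finish subsystem initialization
    ./scripts/rpc.py framework_get_scheduler           # assumed check: report the active scheduler
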
00:42:05.691 09:53:12 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:05.691 09:53:12 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:42:05.691 09:53:12 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:05.691 09:53:12 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:05.691 09:53:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:42:05.691 ************************************ 00:42:05.691 START TEST scheduler_create_thread 00:42:05.691 ************************************ 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:42:05.691 2 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:42:05.691 3 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:42:05.691 4 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:42:05.691 5 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:42:05.691 6 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:42:05.691 7 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:42:05.691 8 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:42:05.691 9 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:42:05.691 10 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:05.691 09:53:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:42:07.068 09:53:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:07.068 09:53:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:42:07.068 09:53:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:42:07.068 09:53:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:07.068 09:53:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:42:08.446 ************************************ 00:42:08.446 END TEST scheduler_create_thread 00:42:08.446 ************************************ 00:42:08.446 09:53:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:08.446 00:42:08.446 real 0m2.612s 00:42:08.446 user 0m0.031s 00:42:08.446 sys 0m0.009s 00:42:08.446 09:53:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:08.446 09:53:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:42:08.446 09:53:15 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:42:08.446 09:53:15 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60519 00:42:08.446 09:53:15 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 60519 ']' 00:42:08.446 09:53:15 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 60519 00:42:08.446 09:53:15 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:42:08.446 09:53:15 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:08.446 09:53:15 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60519 00:42:08.446 killing process with pid 60519 00:42:08.446 09:53:15 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:42:08.446 09:53:15 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:42:08.446 09:53:15 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60519' 00:42:08.446 09:53:15 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 60519 00:42:08.446 09:53:15 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 60519 00:42:08.704 [2024-12-09 09:53:15.573718] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
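scheduler_create_thread, which just completed, exercises the test app's plugin RPCs: it pins four busy threads and four idle threads one per core (cpumasks 0x1 through 0x8), creates unpinned threads, raises one thread's active load, and deletes another. A hedged sketch of the same calls, assuming scheduler_plugin is importable (the harness points PYTHONPATH at test/event/scheduler) and reusing one thread id for both follow-up calls:

    rpc="./scripts/rpc.py --plugin scheduler_plugin -s /var/tmp/spdk.sock"
    $rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy thread pinned to core 0
    $rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # idle thread pinned to core 0
    id=$($rpc scheduler_thread_create -n half_active -a 0)        # unpinned; prints the thread id
    $rpc scheduler_thread_set_active "$id" 50                     # raise its active load to 50
    $rpc scheduler_thread_delete "$id"                            # remove it again
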
00:42:08.961
00:42:08.961 real 0m4.564s
00:42:08.961 user 0m8.437s
00:42:08.961 sys 0m0.421s
00:42:08.961 09:53:15 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:42:08.961 ************************************
00:42:08.961 END TEST event_scheduler
00:42:08.961 ************************************
00:42:08.961 09:53:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:42:08.961 09:53:15 event -- event/event.sh@51 -- # modprobe -n nbd
00:42:08.961 09:53:15 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:42:08.961 09:53:15 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:42:08.961 09:53:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:42:08.961 09:53:15 event -- common/autotest_common.sh@10 -- # set +x
00:42:08.961 ************************************
00:42:08.961 START TEST app_repeat
00:42:08.961 ************************************
00:42:08.961 09:53:15 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:42:08.961 09:53:15 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:42:08.961 09:53:15 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:42:08.961 09:53:15 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:42:08.961 09:53:15 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:42:08.961 09:53:15 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:42:08.961 09:53:15 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:42:08.961 09:53:15 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60632
00:42:08.961 09:53:15 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:42:08.961 09:53:15 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:42:08.961 09:53:15 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60632'
Process app_repeat pid: 60632
00:42:08.961 09:53:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:42:08.961 spdk_app_start Round 0
09:53:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:42:08.961 09:53:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60632 /var/tmp/spdk-nbd.sock
00:42:08.961 09:53:15 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60632 ']'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:42:08.961 09:53:15 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:42:08.961 09:53:15 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:42:08.961 09:53:15 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:42:08.961 09:53:15 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:42:08.961 09:53:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:42:08.961 [2024-12-09 09:53:15.876431] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization...
00:42:08.961 [2024-12-09 09:53:15.876667] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60632 ] 00:42:09.219 [2024-12-09 09:53:16.013743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:09.219 [2024-12-09 09:53:16.097979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:09.219 [2024-12-09 09:53:16.097985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:10.155 09:53:16 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:10.155 09:53:16 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:42:10.155 09:53:16 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:42:10.155 Malloc0 00:42:10.155 09:53:17 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:42:10.414 Malloc1 00:42:10.414 09:53:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:42:10.414 09:53:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:10.414 09:53:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:42:10.414 09:53:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:42:10.414 09:53:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:10.414 09:53:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:42:10.414 09:53:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:42:10.414 09:53:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:10.414 09:53:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:42:10.414 09:53:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:42:10.414 09:53:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:10.414 09:53:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:42:10.414 09:53:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:42:10.414 09:53:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:42:10.414 09:53:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:10.414 09:53:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:42:10.672 /dev/nbd0 00:42:10.672 09:53:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:42:10.672 09:53:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:42:10.672 09:53:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:42:10.672 09:53:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:42:10.672 09:53:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:42:10.672 09:53:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:42:10.673 09:53:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:42:10.673 09:53:17 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:42:10.673 09:53:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:42:10.673 09:53:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:42:10.673 09:53:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:42:10.673 1+0 records in 00:42:10.673 1+0 records out 00:42:10.673 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421095 s, 9.7 MB/s 00:42:10.673 09:53:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:42:10.673 09:53:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:42:10.673 09:53:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:42:10.673 09:53:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:42:10.673 09:53:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:42:10.673 09:53:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:10.673 09:53:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:10.673 09:53:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:42:10.931 /dev/nbd1 00:42:10.931 09:53:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:42:10.931 09:53:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:42:10.931 09:53:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:42:10.931 09:53:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:42:10.931 09:53:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:42:10.931 09:53:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:42:10.931 09:53:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:42:10.931 09:53:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:42:10.931 09:53:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:42:10.931 09:53:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:42:10.931 09:53:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:42:10.931 1+0 records in 00:42:10.931 1+0 records out 00:42:10.931 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393261 s, 10.4 MB/s 00:42:10.931 09:53:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:42:10.931 09:53:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:42:10.931 09:53:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:42:10.931 09:53:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:42:10.931 09:53:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:42:10.931 09:53:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:10.931 09:53:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:10.931 09:53:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:42:10.931 09:53:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:10.931 
09:53:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:42:11.190 09:53:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:42:11.190 { 00:42:11.190 "bdev_name": "Malloc0", 00:42:11.190 "nbd_device": "/dev/nbd0" 00:42:11.190 }, 00:42:11.190 { 00:42:11.190 "bdev_name": "Malloc1", 00:42:11.190 "nbd_device": "/dev/nbd1" 00:42:11.190 } 00:42:11.190 ]' 00:42:11.190 09:53:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:42:11.190 { 00:42:11.190 "bdev_name": "Malloc0", 00:42:11.190 "nbd_device": "/dev/nbd0" 00:42:11.190 }, 00:42:11.190 { 00:42:11.190 "bdev_name": "Malloc1", 00:42:11.190 "nbd_device": "/dev/nbd1" 00:42:11.190 } 00:42:11.190 ]' 00:42:11.190 09:53:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:42:11.190 09:53:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:42:11.190 /dev/nbd1' 00:42:11.191 09:53:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:42:11.191 09:53:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:42:11.191 /dev/nbd1' 00:42:11.191 09:53:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:42:11.191 09:53:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:42:11.191 09:53:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:42:11.191 09:53:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:42:11.191 09:53:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:42:11.191 09:53:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:11.191 09:53:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:42:11.191 09:53:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:42:11.191 09:53:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:42:11.191 09:53:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:42:11.191 09:53:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:42:11.191 256+0 records in 00:42:11.191 256+0 records out 00:42:11.191 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0050185 s, 209 MB/s 00:42:11.191 09:53:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:42:11.191 09:53:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:42:11.450 256+0 records in 00:42:11.450 256+0 records out 00:42:11.450 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270426 s, 38.8 MB/s 00:42:11.450 09:53:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:42:11.450 09:53:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:42:11.450 256+0 records in 00:42:11.450 256+0 records out 00:42:11.450 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259913 s, 40.3 MB/s 00:42:11.450 09:53:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:42:11.450 09:53:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:11.450 09:53:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:42:11.450 09:53:18 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:42:11.450 09:53:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:42:11.450 09:53:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:42:11.450 09:53:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:42:11.450 09:53:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:42:11.450 09:53:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:42:11.450 09:53:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:42:11.450 09:53:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:42:11.450 09:53:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:42:11.450 09:53:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:42:11.450 09:53:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:11.450 09:53:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:11.450 09:53:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:11.450 09:53:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:42:11.450 09:53:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:11.450 09:53:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:42:11.772 09:53:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:11.772 09:53:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:11.772 09:53:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:11.772 09:53:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:11.772 09:53:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:11.772 09:53:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:11.772 09:53:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:42:11.772 09:53:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:42:11.772 09:53:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:11.772 09:53:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:42:11.772 09:53:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:42:12.047 09:53:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:42:12.047 09:53:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:42:12.047 09:53:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:12.047 09:53:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:12.047 09:53:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:42:12.047 09:53:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:42:12.047 09:53:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:42:12.047 09:53:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:42:12.047 09:53:18 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:12.047 09:53:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:42:12.047 09:53:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:42:12.047 09:53:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:42:12.047 09:53:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:42:12.047 09:53:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:42:12.047 09:53:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:42:12.047 09:53:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:42:12.047 09:53:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:42:12.047 09:53:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:42:12.047 09:53:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:42:12.047 09:53:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:42:12.047 09:53:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:42:12.047 09:53:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:42:12.047 09:53:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:42:12.305 09:53:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:42:12.564 [2024-12-09 09:53:19.461173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:12.564 [2024-12-09 09:53:19.517039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:12.564 [2024-12-09 09:53:19.517047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:12.564 [2024-12-09 09:53:19.561286] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:42:12.564 [2024-12-09 09:53:19.561346] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:42:15.850 spdk_app_start Round 1 00:42:15.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:42:15.850 09:53:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:42:15.850 09:53:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:42:15.850 09:53:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60632 /var/tmp/spdk-nbd.sock 00:42:15.850 09:53:22 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60632 ']' 00:42:15.850 09:53:22 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:42:15.850 09:53:22 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:15.850 09:53:22 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
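The waitforlisten 60632 /var/tmp/spdk-nbd.sock call that produced the 'Waiting for process...' line above polls until the pid is alive and its RPC socket answers. The sketch below reconstructs that loop from what the trace shows; it is not the verbatim autotest_common.sh helper, and the rpc_get_methods probe is an assumption about how readiness is detected. The retry budget mirrors the max_retries=100 visible in the log.

# Sketch of the waitforlisten pattern seen in the trace; illustrative only.
waitforlisten_sketch() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        # kill -0 probes that the pid exists without sending a signal
        kill -0 "$pid" 2>/dev/null || return 1
        # assumed readiness probe: any successful RPC means the socket is up
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}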
00:42:15.850 09:53:22 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:15.850 09:53:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:42:15.850 09:53:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:15.850 09:53:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:42:15.850 09:53:22 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:42:15.850 Malloc0 00:42:15.850 09:53:22 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:42:16.108 Malloc1 00:42:16.108 09:53:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:42:16.108 09:53:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:16.108 09:53:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:42:16.108 09:53:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:42:16.108 09:53:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:16.108 09:53:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:42:16.108 09:53:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:42:16.108 09:53:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:16.108 09:53:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:42:16.108 09:53:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:42:16.109 09:53:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:16.109 09:53:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:42:16.109 09:53:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:42:16.109 09:53:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:42:16.109 09:53:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:16.109 09:53:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:42:16.368 /dev/nbd0 00:42:16.368 09:53:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:42:16.368 09:53:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:42:16.368 09:53:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:42:16.368 09:53:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:42:16.368 09:53:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:42:16.368 09:53:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:42:16.368 09:53:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:42:16.368 09:53:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:42:16.368 09:53:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:42:16.368 09:53:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:42:16.368 09:53:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:42:16.368 1+0 records in 00:42:16.368 1+0 records out 
00:42:16.368 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027802 s, 14.7 MB/s 00:42:16.368 09:53:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:42:16.368 09:53:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:42:16.368 09:53:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:42:16.368 09:53:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:42:16.368 09:53:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:42:16.368 09:53:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:16.368 09:53:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:16.368 09:53:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:42:16.627 /dev/nbd1 00:42:16.627 09:53:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:42:16.627 09:53:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:42:16.627 09:53:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:42:16.627 09:53:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:42:16.627 09:53:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:42:16.627 09:53:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:42:16.627 09:53:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:42:16.627 09:53:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:42:16.627 09:53:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:42:16.627 09:53:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:42:16.627 09:53:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:42:16.627 1+0 records in 00:42:16.627 1+0 records out 00:42:16.627 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340476 s, 12.0 MB/s 00:42:16.627 09:53:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:42:16.627 09:53:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:42:16.627 09:53:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:42:16.627 09:53:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:42:16.627 09:53:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:42:16.627 09:53:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:16.627 09:53:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:16.627 09:53:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:42:16.628 09:53:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:16.628 09:53:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:42:16.887 09:53:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:42:16.887 { 00:42:16.887 "bdev_name": "Malloc0", 00:42:16.887 "nbd_device": "/dev/nbd0" 00:42:16.887 }, 00:42:16.887 { 00:42:16.887 "bdev_name": "Malloc1", 00:42:16.887 "nbd_device": "/dev/nbd1" 00:42:16.887 } 
00:42:16.887 ]' 00:42:16.887 09:53:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:42:16.887 { 00:42:16.887 "bdev_name": "Malloc0", 00:42:16.887 "nbd_device": "/dev/nbd0" 00:42:16.887 }, 00:42:16.887 { 00:42:16.887 "bdev_name": "Malloc1", 00:42:16.887 "nbd_device": "/dev/nbd1" 00:42:16.887 } 00:42:16.887 ]' 00:42:16.887 09:53:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:42:16.887 09:53:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:42:16.887 /dev/nbd1' 00:42:16.887 09:53:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:42:16.887 /dev/nbd1' 00:42:16.887 09:53:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:42:16.887 09:53:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:42:16.887 09:53:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:42:16.887 09:53:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:42:17.146 09:53:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:42:17.146 09:53:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:42:17.146 09:53:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:17.146 09:53:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:42:17.146 09:53:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:42:17.146 09:53:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:42:17.146 09:53:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:42:17.146 09:53:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:42:17.146 256+0 records in 00:42:17.146 256+0 records out 00:42:17.146 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00666653 s, 157 MB/s 00:42:17.146 09:53:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:42:17.146 09:53:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:42:17.146 256+0 records in 00:42:17.146 256+0 records out 00:42:17.146 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0226543 s, 46.3 MB/s 00:42:17.146 09:53:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:42:17.146 09:53:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:42:17.146 256+0 records in 00:42:17.146 256+0 records out 00:42:17.146 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0281065 s, 37.3 MB/s 00:42:17.146 09:53:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:42:17.146 09:53:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:17.146 09:53:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:42:17.146 09:53:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:42:17.146 09:53:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:42:17.146 09:53:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:42:17.146 09:53:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:42:17.146 09:53:24 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:42:17.146 09:53:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:42:17.146 09:53:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:42:17.146 09:53:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:42:17.146 09:53:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:42:17.146 09:53:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:42:17.146 09:53:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:17.146 09:53:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:17.146 09:53:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:17.146 09:53:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:42:17.146 09:53:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:17.146 09:53:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:42:17.406 09:53:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:17.406 09:53:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:17.406 09:53:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:17.406 09:53:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:17.406 09:53:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:17.406 09:53:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:17.406 09:53:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:42:17.406 09:53:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:42:17.406 09:53:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:17.406 09:53:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:42:17.666 09:53:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:42:17.666 09:53:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:42:17.666 09:53:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:42:17.666 09:53:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:17.666 09:53:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:17.666 09:53:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:42:17.666 09:53:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:42:17.666 09:53:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:42:17.666 09:53:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:42:17.666 09:53:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:17.666 09:53:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:42:17.925 09:53:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:42:17.925 09:53:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:42:17.925 09:53:24 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:42:17.925 09:53:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:42:17.925 09:53:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:42:17.925 09:53:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:42:17.925 09:53:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:42:17.925 09:53:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:42:17.925 09:53:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:42:17.925 09:53:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:42:17.925 09:53:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:42:17.925 09:53:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:42:17.925 09:53:24 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:42:18.183 09:53:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:42:18.183 [2024-12-09 09:53:25.217678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:18.442 [2024-12-09 09:53:25.273941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:18.442 [2024-12-09 09:53:25.273945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:18.442 [2024-12-09 09:53:25.317684] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:42:18.442 [2024-12-09 09:53:25.317747] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:42:21.766 spdk_app_start Round 2 00:42:21.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:42:21.766 09:53:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:42:21.767 09:53:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:42:21.767 09:53:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60632 /var/tmp/spdk-nbd.sock 00:42:21.767 09:53:28 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60632 ']' 00:42:21.767 09:53:28 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:42:21.767 09:53:28 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:21.767 09:53:28 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
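The dd and cmp lines above are one full data-verify pass: nbd_dd_data_verify writes 1 MiB of fresh random data through each nbd device with O_DIRECT, then compares it back byte for byte. A condensed sketch of that pass follows; paths and sizes are taken straight from the trace, while the wrapper function name is invented for illustration.

# Sketch of the write-then-verify pass traced above (nbd_common.sh
# nbd_dd_data_verify, write + verify modes); simplified, not verbatim test code.
verify_nbd_pair() {
    local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    local nbd_list=('/dev/nbd0' '/dev/nbd1') dev
    # 256 x 4 KiB blocks = the 1 MiB of random data seen in the log
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        # O_DIRECT so the write really lands on the bdev, not in page cache
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in "${nbd_list[@]}"; do
        # byte-wise compare of the first 1 MiB; any mismatch fails the round
        cmp -b -n 1M "$tmp_file" "$dev" || return 1
    done
    rm "$tmp_file"
}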
00:42:21.767 09:53:28 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:21.767 09:53:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:42:21.767 09:53:28 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:21.767 09:53:28 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:42:21.767 09:53:28 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:42:21.767 Malloc0 00:42:21.767 09:53:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:42:22.026 Malloc1 00:42:22.026 09:53:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:42:22.026 09:53:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:22.026 09:53:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:42:22.026 09:53:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:42:22.026 09:53:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:22.026 09:53:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:42:22.026 09:53:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:42:22.026 09:53:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:22.026 09:53:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:42:22.026 09:53:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:42:22.026 09:53:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:22.026 09:53:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:42:22.026 09:53:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:42:22.026 09:53:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:42:22.026 09:53:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:22.026 09:53:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:42:22.286 /dev/nbd0 00:42:22.286 09:53:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:42:22.286 09:53:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:42:22.286 09:53:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:42:22.286 09:53:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:42:22.286 09:53:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:42:22.286 09:53:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:42:22.286 09:53:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:42:22.286 09:53:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:42:22.286 09:53:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:42:22.286 09:53:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:42:22.286 09:53:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:42:22.286 1+0 records in 00:42:22.286 1+0 records out 
00:42:22.286 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357548 s, 11.5 MB/s 00:42:22.286 09:53:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:42:22.286 09:53:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:42:22.286 09:53:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:42:22.286 09:53:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:42:22.286 09:53:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:42:22.286 09:53:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:22.286 09:53:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:22.286 09:53:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:42:22.545 /dev/nbd1 00:42:22.545 09:53:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:42:22.545 09:53:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:42:22.545 09:53:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:42:22.545 09:53:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:42:22.545 09:53:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:42:22.545 09:53:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:42:22.545 09:53:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:42:22.545 09:53:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:42:22.545 09:53:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:42:22.545 09:53:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:42:22.545 09:53:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:42:22.545 1+0 records in 00:42:22.545 1+0 records out 00:42:22.545 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339757 s, 12.1 MB/s 00:42:22.545 09:53:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:42:22.545 09:53:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:42:22.545 09:53:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:42:22.545 09:53:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:42:22.545 09:53:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:42:22.545 09:53:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:22.545 09:53:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:22.545 09:53:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:42:22.545 09:53:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:22.545 09:53:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:42:22.804 09:53:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:42:22.804 { 00:42:22.804 "bdev_name": "Malloc0", 00:42:22.804 "nbd_device": "/dev/nbd0" 00:42:22.804 }, 00:42:22.805 { 00:42:22.805 "bdev_name": "Malloc1", 00:42:22.805 "nbd_device": "/dev/nbd1" 00:42:22.805 } 
00:42:22.805 ]' 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:42:22.805 { 00:42:22.805 "bdev_name": "Malloc0", 00:42:22.805 "nbd_device": "/dev/nbd0" 00:42:22.805 }, 00:42:22.805 { 00:42:22.805 "bdev_name": "Malloc1", 00:42:22.805 "nbd_device": "/dev/nbd1" 00:42:22.805 } 00:42:22.805 ]' 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:42:22.805 /dev/nbd1' 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:42:22.805 /dev/nbd1' 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:42:22.805 256+0 records in 00:42:22.805 256+0 records out 00:42:22.805 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136813 s, 76.6 MB/s 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:42:22.805 256+0 records in 00:42:22.805 256+0 records out 00:42:22.805 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213335 s, 49.2 MB/s 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:42:22.805 256+0 records in 00:42:22.805 256+0 records out 00:42:22.805 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0275475 s, 38.1 MB/s 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:42:22.805 09:53:29 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:22.805 09:53:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:42:23.065 09:53:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:23.065 09:53:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:23.065 09:53:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:23.065 09:53:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:23.065 09:53:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:23.065 09:53:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:23.065 09:53:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:42:23.065 09:53:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:42:23.065 09:53:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:23.065 09:53:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:42:23.324 09:53:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:42:23.324 09:53:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:42:23.324 09:53:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:42:23.324 09:53:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:23.324 09:53:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:23.324 09:53:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:42:23.324 09:53:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:42:23.324 09:53:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:42:23.324 09:53:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:42:23.324 09:53:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:23.324 09:53:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:42:23.584 09:53:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:42:23.584 09:53:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:42:23.584 09:53:30 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:42:23.584 09:53:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:42:23.584 09:53:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:42:23.584 09:53:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:42:23.584 09:53:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:42:23.584 09:53:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:42:23.584 09:53:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:42:23.584 09:53:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:42:23.584 09:53:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:42:23.584 09:53:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:42:23.584 09:53:30 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:42:23.844 09:53:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:42:24.104 [2024-12-09 09:53:30.935593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:24.104 [2024-12-09 09:53:30.989785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:24.104 [2024-12-09 09:53:30.989789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:24.104 [2024-12-09 09:53:31.032744] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:42:24.104 [2024-12-09 09:53:31.032816] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:42:27.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:42:27.435 09:53:33 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60632 /var/tmp/spdk-nbd.sock 00:42:27.435 09:53:33 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60632 ']' 00:42:27.435 09:53:33 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:42:27.435 09:53:33 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:27.435 09:53:33 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
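Every attach and detach in the rounds above is gated on /proc/partitions: waitfornbd loops until the device node appears (then double-checks it with the 4 KiB O_DIRECT readback that produces the '1+0 records' lines), and waitfornbd_exit loops until it disappears. A sketch of the two polls, assuming the 20-iteration budget shown in the trace and omitting the readback:

# Sketch of the /proc/partitions polls traced above (waitfornbd and
# waitfornbd_exit); illustrative, readback check omitted.
wait_nbd_up() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        # -w matches the whole word, so nbd1 never matches nbd10
        grep -q -w "$nbd_name" /proc/partitions && return 0
        sleep 0.1
    done
    return 1
}

wait_nbd_gone() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions || return 0
        sleep 0.1
    done
    return 1
}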
00:42:27.435 09:53:33 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:27.435 09:53:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:42:27.435 09:53:34 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:27.435 09:53:34 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:42:27.435 09:53:34 event.app_repeat -- event/event.sh@39 -- # killprocess 60632 00:42:27.435 09:53:34 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 60632 ']' 00:42:27.435 09:53:34 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 60632 00:42:27.435 09:53:34 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:42:27.435 09:53:34 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:27.435 09:53:34 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60632 00:42:27.435 killing process with pid 60632 00:42:27.435 09:53:34 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:27.435 09:53:34 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:27.435 09:53:34 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60632' 00:42:27.435 09:53:34 event.app_repeat -- common/autotest_common.sh@973 -- # kill 60632 00:42:27.435 09:53:34 event.app_repeat -- common/autotest_common.sh@978 -- # wait 60632 00:42:27.435 spdk_app_start is called in Round 0. 00:42:27.436 Shutdown signal received, stop current app iteration 00:42:27.436 Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 reinitialization... 00:42:27.436 spdk_app_start is called in Round 1. 00:42:27.436 Shutdown signal received, stop current app iteration 00:42:27.436 Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 reinitialization... 00:42:27.436 spdk_app_start is called in Round 2. 00:42:27.436 Shutdown signal received, stop current app iteration 00:42:27.436 Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 reinitialization... 00:42:27.436 spdk_app_start is called in Round 3. 00:42:27.436 Shutdown signal received, stop current app iteration 00:42:27.436 09:53:34 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:42:27.436 09:53:34 event.app_repeat -- event/event.sh@42 -- # return 0 00:42:27.436 00:42:27.436 real 0m18.441s 00:42:27.436 user 0m41.051s 00:42:27.436 sys 0m3.116s 00:42:27.436 09:53:34 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:27.436 09:53:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:42:27.436 ************************************ 00:42:27.436 END TEST app_repeat 00:42:27.436 ************************************ 00:42:27.436 09:53:34 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:42:27.436 09:53:34 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:42:27.436 09:53:34 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:27.436 09:53:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:27.436 09:53:34 event -- common/autotest_common.sh@10 -- # set +x 00:42:27.436 ************************************ 00:42:27.436 START TEST cpu_locks 00:42:27.436 ************************************ 00:42:27.436 09:53:34 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:42:27.436 * Looking for test storage... 
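killprocess 60632, traced at the end of the app_repeat run above, is deliberately defensive: it confirms the pid still exists, checks the command name so a recycled pid is not killed by mistake, then signals and reaps it. Reconstructed as a sketch from the trace rather than copied from autotest_common.sh:

# Sketch of the killprocess teardown visible in the trace; illustrative only.
killprocess_sketch() {
    local pid=$1
    [[ -n $pid ]] || return 1
    # kill -0 probes existence without delivering a signal
    kill -0 "$pid" || return 1
    if [[ $(uname) == Linux ]]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        # the trace shows this compare ('[' reactor_0 = sudo ']'); the real
        # helper treats sudo-owned pids specially, simplified to a refusal here
        [[ $process_name == sudo ]] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    # the target was launched by this test shell, so wait reaps it cleanly
    wait "$pid"
}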
00:42:27.436 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:42:27.436 09:53:34 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:27.436 09:53:34 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:42:27.436 09:53:34 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:27.696 09:53:34 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:27.696 09:53:34 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:27.696 09:53:34 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:27.696 09:53:34 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:27.696 09:53:34 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:42:27.696 09:53:34 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:42:27.696 09:53:34 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:42:27.696 09:53:34 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:42:27.696 09:53:34 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:42:27.696 09:53:34 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:42:27.696 09:53:34 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:42:27.696 09:53:34 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:27.696 09:53:34 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:42:27.696 09:53:34 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:42:27.696 09:53:34 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:27.696 09:53:34 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:27.696 09:53:34 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:42:27.696 09:53:34 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:42:27.696 09:53:34 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:27.696 09:53:34 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:42:27.696 09:53:34 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:42:27.696 09:53:34 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:42:27.696 09:53:34 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:42:27.696 09:53:34 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:27.696 09:53:34 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:42:27.696 09:53:34 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:42:27.696 09:53:34 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:27.696 09:53:34 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:27.696 09:53:34 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:42:27.696 09:53:34 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:27.696 09:53:34 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:27.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:27.696 --rc genhtml_branch_coverage=1 00:42:27.696 --rc genhtml_function_coverage=1 00:42:27.696 --rc genhtml_legend=1 00:42:27.696 --rc geninfo_all_blocks=1 00:42:27.696 --rc geninfo_unexecuted_blocks=1 00:42:27.696 00:42:27.696 ' 00:42:27.696 09:53:34 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:27.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:27.696 --rc genhtml_branch_coverage=1 00:42:27.696 --rc genhtml_function_coverage=1 
00:42:27.696 --rc genhtml_legend=1 00:42:27.696 --rc geninfo_all_blocks=1 00:42:27.696 --rc geninfo_unexecuted_blocks=1 00:42:27.697 00:42:27.697 ' 00:42:27.697 09:53:34 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:27.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:27.697 --rc genhtml_branch_coverage=1 00:42:27.697 --rc genhtml_function_coverage=1 00:42:27.697 --rc genhtml_legend=1 00:42:27.697 --rc geninfo_all_blocks=1 00:42:27.697 --rc geninfo_unexecuted_blocks=1 00:42:27.697 00:42:27.697 ' 00:42:27.697 09:53:34 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:27.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:27.697 --rc genhtml_branch_coverage=1 00:42:27.697 --rc genhtml_function_coverage=1 00:42:27.697 --rc genhtml_legend=1 00:42:27.697 --rc geninfo_all_blocks=1 00:42:27.697 --rc geninfo_unexecuted_blocks=1 00:42:27.697 00:42:27.697 ' 00:42:27.697 09:53:34 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:42:27.697 09:53:34 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:42:27.697 09:53:34 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:42:27.697 09:53:34 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:42:27.697 09:53:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:27.697 09:53:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:27.697 09:53:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:42:27.697 ************************************ 00:42:27.697 START TEST default_locks 00:42:27.697 ************************************ 00:42:27.697 09:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:42:27.697 09:53:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=61259 00:42:27.697 09:53:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:42:27.697 09:53:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 61259 00:42:27.697 09:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 61259 ']' 00:42:27.697 09:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:27.697 09:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:27.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:27.697 09:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:27.697 09:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:27.697 09:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:42:27.697 [2024-12-09 09:53:34.610796] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
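The scripts/common.sh activity above (IFS=.-:, read -ra ver1/ver2, component-by-component decimal compares) is a version gate: it decides whether the installed lcov is at least 2 before choosing which coverage flags go into LCOV_OPTS. A reduced sketch of that comparison, limited to the strictly-less case the trace exercises (lt 1.15 2) and assuming purely numeric components:

# Sketch of the cmp_versions '<' path traced above; illustrative only,
# assumes numeric version components (no 'pre'/'rc' suffixes).
version_lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < len; v++)); do
        # missing components count as 0, so 1.15 vs 2 compares as 1.15 vs 2.0
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1  # equal is not strictly less
}

version_lt 1.15 2 && echo "lcov too old, keep legacy coverage flags"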
00:42:27.697 [2024-12-09 09:53:34.610904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61259 ] 00:42:27.956 [2024-12-09 09:53:34.738459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:27.956 [2024-12-09 09:53:34.792911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:28.902 09:53:35 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:28.902 09:53:35 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:42:28.902 09:53:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 61259 00:42:28.902 09:53:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 61259 00:42:28.902 09:53:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:42:28.902 09:53:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 61259 00:42:28.902 09:53:35 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 61259 ']' 00:42:28.902 09:53:35 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 61259 00:42:28.902 09:53:35 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:42:28.902 09:53:35 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:28.902 09:53:35 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61259 00:42:28.902 killing process with pid 61259 00:42:28.902 09:53:35 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:28.902 09:53:35 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:28.902 09:53:35 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61259' 00:42:28.902 09:53:35 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 61259 00:42:28.902 09:53:35 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 61259 00:42:29.471 09:53:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 61259 00:42:29.471 09:53:36 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:42:29.471 09:53:36 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61259 00:42:29.471 09:53:36 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:42:29.471 09:53:36 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:29.471 09:53:36 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:42:29.471 09:53:36 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:29.471 09:53:36 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 61259 00:42:29.471 09:53:36 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 61259 ']' 00:42:29.471 09:53:36 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:29.471 09:53:36 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:29.471 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:29.471 09:53:36 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:29.471 09:53:36 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:29.471 09:53:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:42:29.471 ERROR: process (pid: 61259) is no longer running 00:42:29.471 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61259) - No such process 00:42:29.471 09:53:36 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:29.471 09:53:36 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:42:29.471 09:53:36 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:42:29.471 09:53:36 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:29.471 09:53:36 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:29.471 09:53:36 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:29.471 09:53:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:42:29.471 09:53:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:42:29.471 09:53:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:42:29.471 09:53:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:42:29.471 00:42:29.471 real 0m1.700s 00:42:29.471 user 0m1.848s 00:42:29.471 sys 0m0.493s 00:42:29.471 09:53:36 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:29.471 09:53:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:42:29.471 ************************************ 00:42:29.471 END TEST default_locks 00:42:29.471 ************************************ 00:42:29.471 09:53:36 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:42:29.471 09:53:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:29.471 09:53:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:29.471 09:53:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:42:29.471 ************************************ 00:42:29.471 START TEST default_locks_via_rpc 00:42:29.471 ************************************ 00:42:29.471 09:53:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:42:29.471 09:53:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=61323 00:42:29.471 09:53:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 61323 00:42:29.471 09:53:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61323 ']' 00:42:29.471 09:53:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:29.471 09:53:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:29.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
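The locks_exist helper traced in these tests reduces to inspecting which POSIX file locks the target process holds; SPDK's per-core lock files appear under /var/tmp with an spdk_cpu_lock prefix, one per claimed core. A sketch of the check:

  locks_exist() {
    local pid=$1
    # lslocks lists the locks held by $pid; grep for SPDK's lock-file prefix.
    lslocks -p "$pid" | grep -q spdk_cpu_lock
  }
  locks_exist 61259 && echo "pid 61259 holds its CPU core lock"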
00:42:29.471 09:53:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:29.471 09:53:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:29.471 09:53:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:42:29.471 09:53:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:42:29.471 [2024-12-09 09:53:36.370210] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:42:29.471 [2024-12-09 09:53:36.370278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61323 ] 00:42:29.732 [2024-12-09 09:53:36.519371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:29.732 [2024-12-09 09:53:36.574264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:30.302 09:53:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:30.302 09:53:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:42:30.302 09:53:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:42:30.302 09:53:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:30.302 09:53:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:42:30.302 09:53:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:30.302 09:53:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:42:30.302 09:53:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:42:30.302 09:53:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:42:30.302 09:53:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:42:30.302 09:53:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:42:30.302 09:53:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:30.302 09:53:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:42:30.302 09:53:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:30.302 09:53:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 61323 00:42:30.302 09:53:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 61323 00:42:30.302 09:53:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:42:30.562 09:53:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 61323 00:42:30.562 09:53:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 61323 ']' 00:42:30.562 09:53:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 61323 00:42:30.562 09:53:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:42:30.562 09:53:37 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:30.562 09:53:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61323 00:42:30.562 09:53:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:30.562 09:53:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:30.562 killing process with pid 61323 00:42:30.562 09:53:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61323' 00:42:30.562 09:53:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 61323 00:42:30.562 09:53:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 61323 00:42:31.135 00:42:31.135 real 0m1.610s 00:42:31.135 user 0m1.685s 00:42:31.135 sys 0m0.499s 00:42:31.135 09:53:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:31.135 09:53:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:42:31.135 ************************************ 00:42:31.135 END TEST default_locks_via_rpc 00:42:31.135 ************************************ 00:42:31.135 09:53:37 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:42:31.135 09:53:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:31.135 09:53:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:31.135 09:53:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:42:31.135 ************************************ 00:42:31.135 START TEST non_locking_app_on_locked_coremask 00:42:31.135 ************************************ 00:42:31.135 09:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:42:31.135 09:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:42:31.135 09:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=61385 00:42:31.135 09:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 61385 /var/tmp/spdk.sock 00:42:31.135 09:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61385 ']' 00:42:31.135 09:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:31.135 09:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:31.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:31.135 09:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:31.135 09:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:31.135 09:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:42:31.135 [2024-12-09 09:53:38.041275] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
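default_locks_via_rpc, which finished just above, toggles core locking at runtime instead of at launch; the two method names are visible in the trace. The same calls can be issued with SPDK's stock rpc.py client, using the socket path from this run:

  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # releases the core-0 lock file
  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # reclaims it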
00:42:31.135 [2024-12-09 09:53:38.041363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61385 ] 00:42:31.394 [2024-12-09 09:53:38.195738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:31.394 [2024-12-09 09:53:38.250735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:31.963 09:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:31.963 09:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:42:31.963 09:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=61409 00:42:31.963 09:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:42:31.963 09:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 61409 /var/tmp/spdk2.sock 00:42:31.963 09:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61409 ']' 00:42:31.963 09:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:42:31.963 09:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:31.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:42:31.963 09:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:42:31.963 09:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:31.963 09:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:42:32.223 [2024-12-09 09:53:39.009528] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:42:32.223 [2024-12-09 09:53:39.009589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61409 ] 00:42:32.223 [2024-12-09 09:53:39.161681] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
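What non_locking_app_on_locked_coremask is arranging here: pid 61385 claims core 0 normally, then pid 61409 (whose "CPU core locks deactivated" notice appears just above) starts on the same mask with --disable-cpumask-locks and must come up cleanly despite the overlap. In outline, with the binary path used in this run:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &    # claims /var/tmp/spdk_cpu_lock_000
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 \
      --disable-cpumask-locks -r /var/tmp/spdk2.sock &        # same core, no claim attempted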
00:42:32.223 [2024-12-09 09:53:39.161733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:32.482 [2024-12-09 09:53:39.270999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:33.095 09:53:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:33.095 09:53:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:42:33.095 09:53:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 61385 00:42:33.095 09:53:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:42:33.095 09:53:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61385 00:42:33.663 09:53:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 61385 00:42:33.663 09:53:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61385 ']' 00:42:33.663 09:53:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61385 00:42:33.663 09:53:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:42:33.663 09:53:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:33.663 09:53:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61385 00:42:33.663 09:53:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:33.663 09:53:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:33.663 killing process with pid 61385 00:42:33.663 09:53:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61385' 00:42:33.663 09:53:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61385 00:42:33.663 09:53:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61385 00:42:34.231 09:53:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 61409 00:42:34.231 09:53:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61409 ']' 00:42:34.231 09:53:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61409 00:42:34.231 09:53:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:42:34.231 09:53:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:34.231 09:53:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61409 00:42:34.231 09:53:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:34.231 09:53:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:34.231 killing process with pid 61409 00:42:34.231 09:53:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61409' 00:42:34.231 09:53:41 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61409 00:42:34.231 09:53:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61409 00:42:34.491 00:42:34.491 real 0m3.482s 00:42:34.491 user 0m3.909s 00:42:34.491 sys 0m0.928s 00:42:34.491 09:53:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:34.491 09:53:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:42:34.491 ************************************ 00:42:34.491 END TEST non_locking_app_on_locked_coremask 00:42:34.491 ************************************ 00:42:34.491 09:53:41 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:42:34.491 09:53:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:34.491 09:53:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:34.491 09:53:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:42:34.491 ************************************ 00:42:34.491 START TEST locking_app_on_unlocked_coremask 00:42:34.491 ************************************ 00:42:34.491 09:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:42:34.491 09:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=61490 00:42:34.491 09:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:42:34.491 09:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 61490 /var/tmp/spdk.sock 00:42:34.491 09:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61490 ']' 00:42:34.491 09:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:34.491 09:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:34.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:34.491 09:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:34.491 09:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:34.491 09:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:42:34.750 [2024-12-09 09:53:41.584923] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:42:34.750 [2024-12-09 09:53:41.584999] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61490 ] 00:42:34.750 [2024-12-09 09:53:41.737812] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:42:34.750 [2024-12-09 09:53:41.737885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:35.009 [2024-12-09 09:53:41.797114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:35.577 09:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:35.577 09:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:42:35.577 09:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=61519 00:42:35.577 09:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:42:35.578 09:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 61519 /var/tmp/spdk2.sock 00:42:35.578 09:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61519 ']' 00:42:35.578 09:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:42:35.578 09:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:35.578 09:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:42:35.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:42:35.578 09:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:35.578 09:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:42:35.578 [2024-12-09 09:53:42.568662] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
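locking_app_on_unlocked_coremask inverts that arrangement: the first target (61490) runs with --disable-cpumask-locks, leaving core 0 unclaimed, so the second target (61519, starting above) can take the lock with locking enabled. Outline:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # core 0 left unclaimed
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # claims spdk_cpu_lock_000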
00:42:35.578 [2024-12-09 09:53:42.568727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61519 ] 00:42:35.837 [2024-12-09 09:53:42.715305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:35.837 [2024-12-09 09:53:42.821282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:36.774 09:53:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:36.774 09:53:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:42:36.774 09:53:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 61519 00:42:36.774 09:53:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:42:36.774 09:53:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61519 00:42:37.343 09:53:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 61490 00:42:37.343 09:53:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61490 ']' 00:42:37.343 09:53:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 61490 00:42:37.343 09:53:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:42:37.343 09:53:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:37.343 09:53:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61490 00:42:37.343 09:53:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:37.343 09:53:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:37.343 killing process with pid 61490 00:42:37.343 09:53:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61490' 00:42:37.343 09:53:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 61490 00:42:37.343 09:53:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 61490 00:42:37.912 09:53:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 61519 00:42:37.912 09:53:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61519 ']' 00:42:37.912 09:53:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 61519 00:42:37.912 09:53:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:42:37.912 09:53:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:37.912 09:53:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61519 00:42:37.912 09:53:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:37.912 09:53:44 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:37.912 killing process with pid 61519 00:42:37.912 09:53:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61519' 00:42:37.912 09:53:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 61519 00:42:37.912 09:53:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 61519 00:42:38.171 00:42:38.171 real 0m3.685s 00:42:38.171 user 0m4.041s 00:42:38.171 sys 0m1.044s 00:42:38.171 09:53:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:38.171 09:53:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:42:38.171 ************************************ 00:42:38.171 END TEST locking_app_on_unlocked_coremask 00:42:38.171 ************************************ 00:42:38.429 09:53:45 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:42:38.429 09:53:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:38.429 09:53:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:38.429 09:53:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:42:38.429 ************************************ 00:42:38.429 START TEST locking_app_on_locked_coremask 00:42:38.429 ************************************ 00:42:38.429 09:53:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:42:38.429 09:53:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61587 00:42:38.429 09:53:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:42:38.429 09:53:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61587 /var/tmp/spdk.sock 00:42:38.429 09:53:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61587 ']' 00:42:38.429 09:53:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:38.429 09:53:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:38.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:38.429 09:53:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:38.429 09:53:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:38.429 09:53:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:42:38.429 [2024-12-09 09:53:45.338439] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
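killprocess, invoked at the end of every test in this suite, guards the kill with a sanity check on the process name (the reactor_0 seen in the trace). A simplified sketch of the autotest_common.sh helper; the real one also special-cases sudo-owned processes:

  killprocess() {
    local pid=$1 process_name
    process_name=$(ps --no-headers -o comm= "$pid") || return 1   # e.g. reactor_0
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"   # works because the target was started as a child of this shell
  }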
00:42:38.429 [2024-12-09 09:53:45.338528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61587 ] 00:42:38.689 [2024-12-09 09:53:45.486768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:38.689 [2024-12-09 09:53:45.543792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:39.258 09:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:39.258 09:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:42:39.258 09:53:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61615 00:42:39.258 09:53:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61615 /var/tmp/spdk2.sock 00:42:39.258 09:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:42:39.258 09:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61615 /var/tmp/spdk2.sock 00:42:39.258 09:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:42:39.258 09:53:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:42:39.258 09:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:39.258 09:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:42:39.258 09:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:39.258 09:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61615 /var/tmp/spdk2.sock 00:42:39.258 09:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61615 ']' 00:42:39.258 09:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:42:39.258 09:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:39.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:42:39.258 09:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:42:39.258 09:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:39.258 09:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:42:39.535 [2024-12-09 09:53:46.323369] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:42:39.535 [2024-12-09 09:53:46.323454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61615 ] 00:42:39.535 [2024-12-09 09:53:46.469020] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61587 has claimed it. 00:42:39.535 [2024-12-09 09:53:46.469076] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:42:40.137 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61615) - No such process 00:42:40.137 ERROR: process (pid: 61615) is no longer running 00:42:40.137 09:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:40.137 09:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:42:40.137 09:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:42:40.137 09:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:40.137 09:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:40.137 09:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:40.137 09:53:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61587 00:42:40.137 09:53:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61587 00:42:40.137 09:53:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:42:40.396 09:53:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61587 00:42:40.396 09:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61587 ']' 00:42:40.396 09:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61587 00:42:40.396 09:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:42:40.396 09:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:40.396 09:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61587 00:42:40.396 09:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:40.396 09:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:40.396 killing process with pid 61587 00:42:40.396 09:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61587' 00:42:40.396 09:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61587 00:42:40.396 09:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61587 00:42:40.963 00:42:40.963 real 0m2.429s 00:42:40.963 user 0m2.767s 00:42:40.963 sys 0m0.608s 00:42:40.963 09:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:40.963 09:53:47 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:42:40.963 ************************************ 00:42:40.963 END TEST locking_app_on_locked_coremask 00:42:40.963 ************************************ 00:42:40.963 09:53:47 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:42:40.963 09:53:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:40.963 09:53:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:40.963 09:53:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:42:40.963 ************************************ 00:42:40.963 START TEST locking_overlapped_coremask 00:42:40.963 ************************************ 00:42:40.963 09:53:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:42:40.963 09:53:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61672 00:42:40.963 09:53:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:42:40.963 09:53:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61672 /var/tmp/spdk.sock 00:42:40.963 09:53:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61672 ']' 00:42:40.963 09:53:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:40.963 09:53:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:40.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:40.963 09:53:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:40.963 09:53:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:40.963 09:53:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:42:40.964 [2024-12-09 09:53:47.827521] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
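locking_app_on_locked_coremask, which just ended, leans on the NOT wrapper seen in the trace: the second target (61615) is expected to die on the already-claimed core, so its failure has to be inverted into a pass. Simplified sketch of the helper, with the es bookkeeping elided:

  NOT() {
    if "$@"; then
      return 1   # the command unexpectedly succeeded
    fi
    return 0     # expected failure, so the test passes
  }
  NOT waitforlisten 61615 /var/tmp/spdk2.sock   # passes: 61615 never came up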
00:42:40.964 [2024-12-09 09:53:47.827594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61672 ] 00:42:40.964 [2024-12-09 09:53:47.976003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:41.222 [2024-12-09 09:53:48.036558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:41.222 [2024-12-09 09:53:48.036748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:41.222 [2024-12-09 09:53:48.036749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:41.792 09:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:41.792 09:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:42:41.792 09:53:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61702 00:42:41.792 09:53:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:42:41.792 09:53:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61702 /var/tmp/spdk2.sock 00:42:41.792 09:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:42:41.792 09:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61702 /var/tmp/spdk2.sock 00:42:41.792 09:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:42:41.792 09:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:41.792 09:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:42:41.792 09:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:41.792 09:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61702 /var/tmp/spdk2.sock 00:42:41.792 09:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61702 ']' 00:42:41.792 09:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:42:41.792 09:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:41.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:42:41.792 09:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:42:41.792 09:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:41.792 09:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:42:41.792 [2024-12-09 09:53:48.797062] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:42:41.792 [2024-12-09 09:53:48.797158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61702 ] 00:42:42.051 [2024-12-09 09:53:48.952741] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61672 has claimed it. 00:42:42.051 [2024-12-09 09:53:48.952801] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:42:42.620 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61702) - No such process 00:42:42.620 ERROR: process (pid: 61702) is no longer running 00:42:42.620 09:53:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:42.620 09:53:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:42:42.620 09:53:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:42:42.620 09:53:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:42.620 09:53:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:42.620 09:53:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:42.620 09:53:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:42:42.620 09:53:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:42:42.620 09:53:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:42:42.620 09:53:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:42:42.620 09:53:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61672 00:42:42.620 09:53:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 61672 ']' 00:42:42.620 09:53:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 61672 00:42:42.620 09:53:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:42:42.620 09:53:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:42.620 09:53:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61672 00:42:42.620 09:53:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:42.620 09:53:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:42.620 killing process with pid 61672 00:42:42.620 09:53:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61672' 00:42:42.620 09:53:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 61672 00:42:42.620 09:53:49 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 61672 00:42:42.879 00:42:42.879 real 0m2.069s 00:42:42.879 user 0m5.817s 00:42:42.879 sys 0m0.405s 00:42:42.879 09:53:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:42.879 09:53:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:42:42.879 ************************************ 00:42:42.879 END TEST locking_overlapped_coremask 00:42:42.879 ************************************ 00:42:42.879 09:53:49 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:42:42.879 09:53:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:42.879 09:53:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:42.879 09:53:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:42:42.879 ************************************ 00:42:42.879 START TEST locking_overlapped_coremask_via_rpc 00:42:42.879 ************************************ 00:42:42.879 09:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:42:42.879 09:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61748 00:42:42.879 09:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:42:42.879 09:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61748 /var/tmp/spdk.sock 00:42:42.879 09:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61748 ']' 00:42:42.879 09:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:42.879 09:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:42.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:42.879 09:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:42.879 09:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:42.879 09:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:42:43.139 [2024-12-09 09:53:49.950626] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:42:43.139 [2024-12-09 09:53:49.951073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61748 ] 00:42:43.139 [2024-12-09 09:53:50.103473] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
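check_remaining_locks, executed near the end of the previous test, verifies that exactly the lock files for mask 0x7 (cores 0 through 2) are left behind. It is essentially the glob comparison from the trace:

  check_remaining_locks() {
    locks=(/var/tmp/spdk_cpu_lock_*)                     # lock files that actually exist
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # what mask 0x7 should leave
    [[ ${locks[*]} == "${locks_expected[*]}" ]]
  }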
00:42:43.139 [2024-12-09 09:53:50.103525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:43.139 [2024-12-09 09:53:50.168404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:43.139 [2024-12-09 09:53:50.168531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:43.139 [2024-12-09 09:53:50.168532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:44.073 09:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:44.073 09:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:42:44.073 09:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61778 00:42:44.073 09:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:42:44.073 09:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61778 /var/tmp/spdk2.sock 00:42:44.073 09:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61778 ']' 00:42:44.073 09:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:42:44.073 09:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:44.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:42:44.073 09:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:42:44.073 09:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:44.073 09:53:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:42:44.073 [2024-12-09 09:53:50.941249] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:42:44.073 [2024-12-09 09:53:50.941347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61778 ] 00:42:44.073 [2024-12-09 09:53:51.095760] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
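For orientation in this via_rpc variant: mask 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the two targets overlap on exactly one core. Both start with --disable-cpumask-locks, which is why they coexist above; re-enabling locks over RPC is what must fail. The overlap can be read straight off the masks:

  printf 'shared cores mask: 0x%x\n' $(( 0x7 & 0x1c ))   # 0x4, i.e. core 2 is claimed by both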
00:42:44.073 [2024-12-09 09:53:51.095820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:44.331 [2024-12-09 09:53:51.215223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:44.331 [2024-12-09 09:53:51.215314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:44.331 [2024-12-09 09:53:51.215321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:42:44.897 09:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:44.897 09:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:42:44.897 09:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:42:44.897 09:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.897 09:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:42:44.897 09:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.897 09:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:42:44.897 09:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:42:44.897 09:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:42:44.897 09:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:42:44.897 09:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:44.897 09:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:42:44.897 09:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:44.897 09:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:42:44.897 09:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.897 09:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:42:44.897 [2024-12-09 09:53:51.867611] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61748 has claimed it. 
00:42:44.897 2024/12/09 09:53:51 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:42:44.897 request: 00:42:44.897 { 00:42:44.897 "method": "framework_enable_cpumask_locks", 00:42:44.897 "params": {} 00:42:44.897 } 00:42:44.897 Got JSON-RPC error response 00:42:44.897 GoRPCClient: error on JSON-RPC call 00:42:44.897 09:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:42:44.897 09:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:42:44.897 09:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:44.897 09:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:44.897 09:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:44.897 09:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61748 /var/tmp/spdk.sock 00:42:44.897 09:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61748 ']' 00:42:44.897 09:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:44.897 09:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:44.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:44.897 09:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:44.897 09:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:44.897 09:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:42:45.156 09:53:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:45.156 09:53:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:42:45.156 09:53:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61778 /var/tmp/spdk2.sock 00:42:45.156 09:53:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61778 ']' 00:42:45.156 09:53:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:42:45.156 09:53:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:45.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:42:45.156 09:53:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
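The -32603 failure above is the point of this test: the second target was only able to start on an overlapping coremask because it took no locks, so asking it to claim them afterwards collides with the first target on core 2. A sketch of the same sequence by hand, with the first target's 0x7 mask inferred from the reactor notices above:

```bash
SPDK=/home/vagrant/spdk_repo/spdk
# First target claims cores 0-2 (mask 0x7, as the reactor notices suggest).
"$SPDK"/build/bin/spdk_tgt -m 0x7 -r /var/tmp/spdk.sock &
sleep 2
# Second target overlaps on core 2 but skips the lock files at startup.
"$SPDK"/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
sleep 2
# Claiming the locks after the fact now fails: core 2 is already held.
"$SPDK"/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
    || echo "expected failure: Code=-32603, Failed to claim CPU core: 2"
```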
00:42:45.156 09:53:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:45.157 09:53:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:42:45.417 09:53:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:45.417 09:53:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:42:45.417 09:53:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:42:45.417 09:53:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:42:45.417 09:53:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:42:45.417 09:53:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:42:45.417 00:42:45.417 real 0m2.526s 00:42:45.417 user 0m1.259s 00:42:45.417 sys 0m0.201s 00:42:45.417 09:53:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:45.417 09:53:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:42:45.417 ************************************ 00:42:45.417 END TEST locking_overlapped_coremask_via_rpc 00:42:45.417 ************************************ 00:42:45.677 09:53:52 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:42:45.677 09:53:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61748 ]] 00:42:45.677 09:53:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61748 00:42:45.677 09:53:52 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61748 ']' 00:42:45.677 09:53:52 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61748 00:42:45.677 09:53:52 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:42:45.677 09:53:52 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:45.677 09:53:52 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61748 00:42:45.677 09:53:52 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:45.677 09:53:52 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:45.677 09:53:52 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61748' 00:42:45.677 killing process with pid 61748 00:42:45.677 09:53:52 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61748 00:42:45.677 09:53:52 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61748 00:42:45.937 09:53:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61778 ]] 00:42:45.937 09:53:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61778 00:42:45.937 09:53:52 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61778 ']' 00:42:45.937 09:53:52 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61778 00:42:45.937 09:53:52 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:42:45.937 09:53:52 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:45.937 
09:53:52 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61778 00:42:45.937 09:53:52 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:42:45.937 09:53:52 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:42:45.937 killing process with pid 61778 00:42:45.937 09:53:52 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61778' 00:42:45.937 09:53:52 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61778 00:42:45.937 09:53:52 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61778 00:42:46.197 09:53:53 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:42:46.197 09:53:53 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:42:46.197 09:53:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61748 ]] 00:42:46.197 09:53:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61748 00:42:46.197 09:53:53 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61748 ']' 00:42:46.197 09:53:53 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61748 00:42:46.197 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61748) - No such process 00:42:46.197 Process with pid 61748 is not found 00:42:46.197 09:53:53 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61748 is not found' 00:42:46.197 09:53:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61778 ]] 00:42:46.197 09:53:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61778 00:42:46.197 09:53:53 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61778 ']' 00:42:46.197 09:53:53 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61778 00:42:46.197 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61778) - No such process 00:42:46.197 Process with pid 61778 is not found 00:42:46.197 09:53:53 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61778 is not found' 00:42:46.197 09:53:53 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:42:46.197 00:42:46.197 real 0m18.877s 00:42:46.197 user 0m33.445s 00:42:46.197 sys 0m5.090s 00:42:46.197 09:53:53 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:46.197 09:53:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:42:46.197 ************************************ 00:42:46.197 END TEST cpu_locks 00:42:46.197 ************************************ 00:42:46.455 00:42:46.455 real 0m46.521s 00:42:46.455 user 1m29.638s 00:42:46.455 sys 0m9.237s 00:42:46.455 09:53:53 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:46.455 09:53:53 event -- common/autotest_common.sh@10 -- # set +x 00:42:46.455 ************************************ 00:42:46.455 END TEST event 00:42:46.455 ************************************ 00:42:46.455 09:53:53 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:42:46.455 09:53:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:46.455 09:53:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:46.455 09:53:53 -- common/autotest_common.sh@10 -- # set +x 00:42:46.455 ************************************ 00:42:46.455 START TEST thread 00:42:46.455 ************************************ 00:42:46.455 09:53:53 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:42:46.455 * Looking for test storage... 
00:42:46.455 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:42:46.455 09:53:53 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:46.455 09:53:53 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:42:46.455 09:53:53 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:46.714 09:53:53 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:46.714 09:53:53 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:46.714 09:53:53 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:46.714 09:53:53 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:46.714 09:53:53 thread -- scripts/common.sh@336 -- # IFS=.-: 00:42:46.714 09:53:53 thread -- scripts/common.sh@336 -- # read -ra ver1 00:42:46.714 09:53:53 thread -- scripts/common.sh@337 -- # IFS=.-: 00:42:46.714 09:53:53 thread -- scripts/common.sh@337 -- # read -ra ver2 00:42:46.714 09:53:53 thread -- scripts/common.sh@338 -- # local 'op=<' 00:42:46.714 09:53:53 thread -- scripts/common.sh@340 -- # ver1_l=2 00:42:46.714 09:53:53 thread -- scripts/common.sh@341 -- # ver2_l=1 00:42:46.714 09:53:53 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:46.714 09:53:53 thread -- scripts/common.sh@344 -- # case "$op" in 00:42:46.714 09:53:53 thread -- scripts/common.sh@345 -- # : 1 00:42:46.714 09:53:53 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:46.714 09:53:53 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:46.714 09:53:53 thread -- scripts/common.sh@365 -- # decimal 1 00:42:46.714 09:53:53 thread -- scripts/common.sh@353 -- # local d=1 00:42:46.714 09:53:53 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:46.714 09:53:53 thread -- scripts/common.sh@355 -- # echo 1 00:42:46.714 09:53:53 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:42:46.714 09:53:53 thread -- scripts/common.sh@366 -- # decimal 2 00:42:46.714 09:53:53 thread -- scripts/common.sh@353 -- # local d=2 00:42:46.714 09:53:53 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:46.714 09:53:53 thread -- scripts/common.sh@355 -- # echo 2 00:42:46.714 09:53:53 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:42:46.714 09:53:53 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:46.714 09:53:53 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:46.714 09:53:53 thread -- scripts/common.sh@368 -- # return 0 00:42:46.714 09:53:53 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:46.714 09:53:53 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:46.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:46.714 --rc genhtml_branch_coverage=1 00:42:46.714 --rc genhtml_function_coverage=1 00:42:46.714 --rc genhtml_legend=1 00:42:46.714 --rc geninfo_all_blocks=1 00:42:46.714 --rc geninfo_unexecuted_blocks=1 00:42:46.714 00:42:46.714 ' 00:42:46.714 09:53:53 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:46.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:46.714 --rc genhtml_branch_coverage=1 00:42:46.714 --rc genhtml_function_coverage=1 00:42:46.714 --rc genhtml_legend=1 00:42:46.714 --rc geninfo_all_blocks=1 00:42:46.714 --rc geninfo_unexecuted_blocks=1 00:42:46.714 00:42:46.714 ' 00:42:46.714 09:53:53 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:46.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:42:46.714 --rc genhtml_branch_coverage=1 00:42:46.714 --rc genhtml_function_coverage=1 00:42:46.714 --rc genhtml_legend=1 00:42:46.714 --rc geninfo_all_blocks=1 00:42:46.714 --rc geninfo_unexecuted_blocks=1 00:42:46.714 00:42:46.714 ' 00:42:46.714 09:53:53 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:46.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:46.714 --rc genhtml_branch_coverage=1 00:42:46.714 --rc genhtml_function_coverage=1 00:42:46.714 --rc genhtml_legend=1 00:42:46.715 --rc geninfo_all_blocks=1 00:42:46.715 --rc geninfo_unexecuted_blocks=1 00:42:46.715 00:42:46.715 ' 00:42:46.715 09:53:53 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:42:46.715 09:53:53 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:42:46.715 09:53:53 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:46.715 09:53:53 thread -- common/autotest_common.sh@10 -- # set +x 00:42:46.715 ************************************ 00:42:46.715 START TEST thread_poller_perf 00:42:46.715 ************************************ 00:42:46.715 09:53:53 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:42:46.715 [2024-12-09 09:53:53.578753] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:42:46.715 [2024-12-09 09:53:53.578885] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61933 ] 00:42:46.715 [2024-12-09 09:53:53.735508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:46.973 [2024-12-09 09:53:53.793161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:46.973 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:42:47.908 [2024-12-09T09:53:54.952Z] ====================================== 00:42:47.908 [2024-12-09T09:53:54.952Z] busy:2297319596 (cyc) 00:42:47.908 [2024-12-09T09:53:54.952Z] total_run_count: 387000 00:42:47.908 [2024-12-09T09:53:54.952Z] tsc_hz: 2290000000 (cyc) 00:42:47.908 [2024-12-09T09:53:54.952Z] ====================================== 00:42:47.908 [2024-12-09T09:53:54.952Z] poller_cost: 5936 (cyc), 2592 (nsec) 00:42:47.908 00:42:47.908 real 0m1.280s 00:42:47.908 user 0m1.135s 00:42:47.908 sys 0m0.039s 00:42:47.908 09:53:54 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:47.908 09:53:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:42:47.908 ************************************ 00:42:47.908 END TEST thread_poller_perf 00:42:47.908 ************************************ 00:42:47.908 09:53:54 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:42:47.908 09:53:54 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:42:47.908 09:53:54 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:47.908 09:53:54 thread -- common/autotest_common.sh@10 -- # set +x 00:42:47.908 ************************************ 00:42:47.908 START TEST thread_poller_perf 00:42:47.908 ************************************ 00:42:47.908 09:53:54 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:42:47.908 [2024-12-09 09:53:54.929525] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:42:47.908 [2024-12-09 09:53:54.929615] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61968 ] 00:42:48.167 [2024-12-09 09:53:55.083575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:48.167 [2024-12-09 09:53:55.136864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:48.167 Running 1000 pollers for 1 seconds with 0 microseconds period. 
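poller_cost in the summary block above is just busy cycles divided by total_run_count, then converted to wall time via tsc_hz: 2297319596 / 387000 ≈ 5936 cycles per call, and 5936 cycles at 2.29 GHz ≈ 2592 ns. The same arithmetic in shell, using the figures from the 1-microsecond-period run:

```bash
busy=2297319596 runs=387000 tsc_hz=2290000000
cyc=$(( busy / runs ))                 # 5936 cycles per poller invocation
nsec=$(( cyc * 1000000000 / tsc_hz ))  # 2592 ns at a 2.29 GHz TSC
echo "poller_cost: $cyc (cyc), $nsec (nsec)"
```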
00:42:49.546 [2024-12-09T09:53:56.590Z] ====================================== 00:42:49.546 [2024-12-09T09:53:56.590Z] busy:2292025518 (cyc) 00:42:49.546 [2024-12-09T09:53:56.590Z] total_run_count: 4681000 00:42:49.546 [2024-12-09T09:53:56.590Z] tsc_hz: 2290000000 (cyc) 00:42:49.546 [2024-12-09T09:53:56.590Z] ====================================== 00:42:49.546 [2024-12-09T09:53:56.590Z] poller_cost: 489 (cyc), 213 (nsec) 00:42:49.546 00:42:49.546 real 0m1.280s 00:42:49.546 user 0m1.129s 00:42:49.546 sys 0m0.045s 00:42:49.546 09:53:56 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:49.546 09:53:56 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:42:49.546 ************************************ 00:42:49.546 END TEST thread_poller_perf 00:42:49.546 ************************************ 00:42:49.546 09:53:56 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:42:49.546 00:42:49.546 real 0m2.908s 00:42:49.546 user 0m2.427s 00:42:49.546 sys 0m0.286s 00:42:49.546 09:53:56 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:49.546 09:53:56 thread -- common/autotest_common.sh@10 -- # set +x 00:42:49.546 ************************************ 00:42:49.546 END TEST thread 00:42:49.546 ************************************ 00:42:49.546 09:53:56 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:42:49.546 09:53:56 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:42:49.546 09:53:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:49.546 09:53:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:49.546 09:53:56 -- common/autotest_common.sh@10 -- # set +x 00:42:49.546 ************************************ 00:42:49.546 START TEST app_cmdline 00:42:49.546 ************************************ 00:42:49.546 09:53:56 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:42:49.546 * Looking for test storage... 
00:42:49.546 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:42:49.546 09:53:56 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:49.546 09:53:56 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:42:49.546 09:53:56 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:49.546 09:53:56 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:49.546 09:53:56 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:49.546 09:53:56 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:49.546 09:53:56 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:49.546 09:53:56 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:42:49.546 09:53:56 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:42:49.546 09:53:56 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:42:49.546 09:53:56 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:42:49.546 09:53:56 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:42:49.546 09:53:56 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:42:49.546 09:53:56 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:42:49.546 09:53:56 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:49.546 09:53:56 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:42:49.546 09:53:56 app_cmdline -- scripts/common.sh@345 -- # : 1 00:42:49.546 09:53:56 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:49.546 09:53:56 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:49.546 09:53:56 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:42:49.546 09:53:56 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:42:49.547 09:53:56 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:49.547 09:53:56 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:42:49.547 09:53:56 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:42:49.547 09:53:56 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:42:49.547 09:53:56 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:42:49.547 09:53:56 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:49.547 09:53:56 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:42:49.547 09:53:56 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:42:49.547 09:53:56 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:49.547 09:53:56 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:49.547 09:53:56 app_cmdline -- scripts/common.sh@368 -- # return 0 00:42:49.547 09:53:56 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:49.547 09:53:56 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:49.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:49.547 --rc genhtml_branch_coverage=1 00:42:49.547 --rc genhtml_function_coverage=1 00:42:49.547 --rc genhtml_legend=1 00:42:49.547 --rc geninfo_all_blocks=1 00:42:49.547 --rc geninfo_unexecuted_blocks=1 00:42:49.547 00:42:49.547 ' 00:42:49.547 09:53:56 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:49.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:49.547 --rc genhtml_branch_coverage=1 00:42:49.547 --rc genhtml_function_coverage=1 00:42:49.547 --rc genhtml_legend=1 00:42:49.547 --rc geninfo_all_blocks=1 00:42:49.547 --rc geninfo_unexecuted_blocks=1 00:42:49.547 
00:42:49.547 ' 00:42:49.547 09:53:56 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:49.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:49.547 --rc genhtml_branch_coverage=1 00:42:49.547 --rc genhtml_function_coverage=1 00:42:49.547 --rc genhtml_legend=1 00:42:49.547 --rc geninfo_all_blocks=1 00:42:49.547 --rc geninfo_unexecuted_blocks=1 00:42:49.547 00:42:49.547 ' 00:42:49.547 09:53:56 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:49.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:49.547 --rc genhtml_branch_coverage=1 00:42:49.547 --rc genhtml_function_coverage=1 00:42:49.547 --rc genhtml_legend=1 00:42:49.547 --rc geninfo_all_blocks=1 00:42:49.547 --rc geninfo_unexecuted_blocks=1 00:42:49.547 00:42:49.547 ' 00:42:49.547 09:53:56 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:42:49.547 09:53:56 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=62046 00:42:49.547 09:53:56 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:42:49.547 09:53:56 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 62046 00:42:49.547 09:53:56 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 62046 ']' 00:42:49.547 09:53:56 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:49.547 09:53:56 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:49.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:49.547 09:53:56 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:49.547 09:53:56 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:49.547 09:53:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:42:49.547 [2024-12-09 09:53:56.585719] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:42:49.547 [2024-12-09 09:53:56.585801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62046 ] 00:42:49.807 [2024-12-09 09:53:56.735550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:49.807 [2024-12-09 09:53:56.791079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:50.744 09:53:57 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:50.744 09:53:57 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:42:50.744 09:53:57 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:42:50.744 { 00:42:50.744 "fields": { 00:42:50.744 "commit": "b71c8b8dd", 00:42:50.744 "major": 25, 00:42:50.744 "minor": 1, 00:42:50.744 "patch": 0, 00:42:50.744 "suffix": "-pre" 00:42:50.744 }, 00:42:50.744 "version": "SPDK v25.01-pre git sha1 b71c8b8dd" 00:42:50.744 } 00:42:50.744 09:53:57 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:42:50.744 09:53:57 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:42:50.744 09:53:57 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:42:50.744 09:53:57 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:42:50.744 09:53:57 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:42:50.744 09:53:57 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:50.744 09:53:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:42:50.744 09:53:57 app_cmdline -- app/cmdline.sh@26 -- # sort 00:42:50.744 09:53:57 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:42:50.744 09:53:57 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:51.005 09:53:57 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:42:51.005 09:53:57 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:42:51.005 09:53:57 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:42:51.005 09:53:57 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:42:51.005 09:53:57 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:42:51.005 09:53:57 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:51.005 09:53:57 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:51.005 09:53:57 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:51.005 09:53:57 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:51.005 09:53:57 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:51.005 09:53:57 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:51.005 09:53:57 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:51.005 09:53:57 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:42:51.005 09:53:57 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:42:51.005 2024/12/09 09:53:57 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:42:51.005 request: 00:42:51.005 { 00:42:51.005 "method": "env_dpdk_get_mem_stats", 00:42:51.005 "params": {} 00:42:51.005 } 00:42:51.005 Got JSON-RPC error response 00:42:51.005 GoRPCClient: error on JSON-RPC call 00:42:51.005 09:53:58 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:42:51.005 09:53:58 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:51.005 09:53:58 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:51.005 09:53:58 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:51.005 09:53:58 app_cmdline -- app/cmdline.sh@1 -- # killprocess 62046 00:42:51.005 09:53:58 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 62046 ']' 00:42:51.005 09:53:58 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 62046 00:42:51.005 09:53:58 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:42:51.005 09:53:58 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:51.005 09:53:58 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62046 00:42:51.264 09:53:58 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:51.264 killing process with pid 62046 00:42:51.264 09:53:58 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:51.264 09:53:58 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62046' 00:42:51.264 09:53:58 app_cmdline -- common/autotest_common.sh@973 -- # kill 62046 00:42:51.264 09:53:58 app_cmdline -- common/autotest_common.sh@978 -- # wait 62046 00:42:51.524 00:42:51.524 real 0m2.085s 00:42:51.524 user 0m2.463s 00:42:51.524 sys 0m0.542s 00:42:51.524 09:53:58 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:51.524 09:53:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:42:51.524 ************************************ 00:42:51.524 END TEST app_cmdline 00:42:51.524 ************************************ 00:42:51.524 09:53:58 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:42:51.524 09:53:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:51.524 09:53:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:51.524 09:53:58 -- common/autotest_common.sh@10 -- # set +x 00:42:51.524 ************************************ 00:42:51.524 START TEST version 00:42:51.524 ************************************ 00:42:51.524 09:53:58 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:42:51.524 * Looking for test storage... 
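Both RPC outcomes in the app_cmdline test above follow from the --rpcs-allowed spdk_get_version,rpc_get_methods flag the target was started with: the two whitelisted methods succeed, while env_dpdk_get_mem_stats, a method that exists on an unrestricted target, is rejected with -32601 Method not found. A sketch of the same checks by hand, against the default /var/tmp/spdk.sock:

```bash
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK"/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
sleep 2
"$SPDK"/scripts/rpc.py spdk_get_version    # allowed: prints the version JSON
"$SPDK"/scripts/rpc.py rpc_get_methods     # allowed: lists exactly the two methods
"$SPDK"/scripts/rpc.py env_dpdk_get_mem_stats \
    || echo "expected: Code=-32601, Method not found"
```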
00:42:51.524 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:42:51.524 09:53:58 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:51.783 09:53:58 version -- common/autotest_common.sh@1711 -- # lcov --version 00:42:51.783 09:53:58 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:51.783 09:53:58 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:51.783 09:53:58 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:51.783 09:53:58 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:51.783 09:53:58 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:51.783 09:53:58 version -- scripts/common.sh@336 -- # IFS=.-: 00:42:51.783 09:53:58 version -- scripts/common.sh@336 -- # read -ra ver1 00:42:51.783 09:53:58 version -- scripts/common.sh@337 -- # IFS=.-: 00:42:51.783 09:53:58 version -- scripts/common.sh@337 -- # read -ra ver2 00:42:51.783 09:53:58 version -- scripts/common.sh@338 -- # local 'op=<' 00:42:51.783 09:53:58 version -- scripts/common.sh@340 -- # ver1_l=2 00:42:51.783 09:53:58 version -- scripts/common.sh@341 -- # ver2_l=1 00:42:51.783 09:53:58 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:51.783 09:53:58 version -- scripts/common.sh@344 -- # case "$op" in 00:42:51.783 09:53:58 version -- scripts/common.sh@345 -- # : 1 00:42:51.783 09:53:58 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:51.783 09:53:58 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:51.783 09:53:58 version -- scripts/common.sh@365 -- # decimal 1 00:42:51.783 09:53:58 version -- scripts/common.sh@353 -- # local d=1 00:42:51.783 09:53:58 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:51.783 09:53:58 version -- scripts/common.sh@355 -- # echo 1 00:42:51.783 09:53:58 version -- scripts/common.sh@365 -- # ver1[v]=1 00:42:51.783 09:53:58 version -- scripts/common.sh@366 -- # decimal 2 00:42:51.784 09:53:58 version -- scripts/common.sh@353 -- # local d=2 00:42:51.784 09:53:58 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:51.784 09:53:58 version -- scripts/common.sh@355 -- # echo 2 00:42:51.784 09:53:58 version -- scripts/common.sh@366 -- # ver2[v]=2 00:42:51.784 09:53:58 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:51.784 09:53:58 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:51.784 09:53:58 version -- scripts/common.sh@368 -- # return 0 00:42:51.784 09:53:58 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:51.784 09:53:58 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:51.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:51.784 --rc genhtml_branch_coverage=1 00:42:51.784 --rc genhtml_function_coverage=1 00:42:51.784 --rc genhtml_legend=1 00:42:51.784 --rc geninfo_all_blocks=1 00:42:51.784 --rc geninfo_unexecuted_blocks=1 00:42:51.784 00:42:51.784 ' 00:42:51.784 09:53:58 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:51.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:51.784 --rc genhtml_branch_coverage=1 00:42:51.784 --rc genhtml_function_coverage=1 00:42:51.784 --rc genhtml_legend=1 00:42:51.784 --rc geninfo_all_blocks=1 00:42:51.784 --rc geninfo_unexecuted_blocks=1 00:42:51.784 00:42:51.784 ' 00:42:51.784 09:53:58 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:51.784 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:42:51.784 --rc genhtml_branch_coverage=1 00:42:51.784 --rc genhtml_function_coverage=1 00:42:51.784 --rc genhtml_legend=1 00:42:51.784 --rc geninfo_all_blocks=1 00:42:51.784 --rc geninfo_unexecuted_blocks=1 00:42:51.784 00:42:51.784 ' 00:42:51.784 09:53:58 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:51.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:51.784 --rc genhtml_branch_coverage=1 00:42:51.784 --rc genhtml_function_coverage=1 00:42:51.784 --rc genhtml_legend=1 00:42:51.784 --rc geninfo_all_blocks=1 00:42:51.784 --rc geninfo_unexecuted_blocks=1 00:42:51.784 00:42:51.784 ' 00:42:51.784 09:53:58 version -- app/version.sh@17 -- # get_header_version major 00:42:51.784 09:53:58 version -- app/version.sh@14 -- # tr -d '"' 00:42:51.784 09:53:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:42:51.784 09:53:58 version -- app/version.sh@14 -- # cut -f2 00:42:51.784 09:53:58 version -- app/version.sh@17 -- # major=25 00:42:51.784 09:53:58 version -- app/version.sh@18 -- # get_header_version minor 00:42:51.784 09:53:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:42:51.784 09:53:58 version -- app/version.sh@14 -- # cut -f2 00:42:51.784 09:53:58 version -- app/version.sh@14 -- # tr -d '"' 00:42:51.784 09:53:58 version -- app/version.sh@18 -- # minor=1 00:42:51.784 09:53:58 version -- app/version.sh@19 -- # get_header_version patch 00:42:51.784 09:53:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:42:51.784 09:53:58 version -- app/version.sh@14 -- # cut -f2 00:42:51.784 09:53:58 version -- app/version.sh@14 -- # tr -d '"' 00:42:51.784 09:53:58 version -- app/version.sh@19 -- # patch=0 00:42:51.784 09:53:58 version -- app/version.sh@20 -- # get_header_version suffix 00:42:51.784 09:53:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:42:51.784 09:53:58 version -- app/version.sh@14 -- # cut -f2 00:42:51.784 09:53:58 version -- app/version.sh@14 -- # tr -d '"' 00:42:51.784 09:53:58 version -- app/version.sh@20 -- # suffix=-pre 00:42:51.784 09:53:58 version -- app/version.sh@22 -- # version=25.1 00:42:51.784 09:53:58 version -- app/version.sh@25 -- # (( patch != 0 )) 00:42:51.784 09:53:58 version -- app/version.sh@28 -- # version=25.1rc0 00:42:51.784 09:53:58 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:42:51.784 09:53:58 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:42:51.784 09:53:58 version -- app/version.sh@30 -- # py_version=25.1rc0 00:42:51.784 09:53:58 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:42:51.784 00:42:51.784 real 0m0.306s 00:42:51.784 user 0m0.175s 00:42:51.784 sys 0m0.189s 00:42:51.784 09:53:58 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:51.784 09:53:58 version -- common/autotest_common.sh@10 -- # set +x 00:42:51.784 ************************************ 00:42:51.784 END TEST version 00:42:51.784 ************************************ 00:42:51.784 09:53:58 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:42:51.784 09:53:58 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:42:51.784 09:53:58 -- spdk/autotest.sh@194 -- # uname -s 00:42:51.784 09:53:58 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:42:51.784 09:53:58 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:42:51.784 09:53:58 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:42:51.784 09:53:58 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:42:51.784 09:53:58 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:42:51.784 09:53:58 -- spdk/autotest.sh@260 -- # timing_exit lib 00:42:51.784 09:53:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:51.784 09:53:58 -- common/autotest_common.sh@10 -- # set +x 00:42:52.044 09:53:58 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:42:52.044 09:53:58 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:42:52.044 09:53:58 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:42:52.044 09:53:58 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:42:52.044 09:53:58 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:42:52.044 09:53:58 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:42:52.044 09:53:58 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:42:52.044 09:53:58 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:52.044 09:53:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:52.044 09:53:58 -- common/autotest_common.sh@10 -- # set +x 00:42:52.044 ************************************ 00:42:52.044 START TEST nvmf_tcp 00:42:52.044 ************************************ 00:42:52.044 09:53:58 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:42:52.044 * Looking for test storage... 00:42:52.044 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:42:52.044 09:53:58 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:52.044 09:53:58 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:42:52.044 09:53:58 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:52.044 09:53:59 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:52.044 09:53:59 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:52.044 09:53:59 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:52.044 09:53:59 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:52.044 09:53:59 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:42:52.044 09:53:59 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:42:52.044 09:53:59 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:42:52.044 09:53:59 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:42:52.044 09:53:59 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:42:52.044 09:53:59 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:42:52.044 09:53:59 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:42:52.044 09:53:59 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:52.044 09:53:59 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:42:52.044 09:53:59 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:42:52.044 09:53:59 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:52.044 09:53:59 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:52.044 09:53:59 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:42:52.044 09:53:59 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:42:52.044 09:53:59 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:52.044 09:53:59 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:42:52.044 09:53:59 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:42:52.044 09:53:59 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:42:52.044 09:53:59 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:42:52.044 09:53:59 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:52.304 09:53:59 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:42:52.304 09:53:59 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:42:52.304 09:53:59 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:52.304 09:53:59 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:52.304 09:53:59 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:42:52.304 09:53:59 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:52.304 09:53:59 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:52.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:52.304 --rc genhtml_branch_coverage=1 00:42:52.304 --rc genhtml_function_coverage=1 00:42:52.304 --rc genhtml_legend=1 00:42:52.304 --rc geninfo_all_blocks=1 00:42:52.304 --rc geninfo_unexecuted_blocks=1 00:42:52.304 00:42:52.304 ' 00:42:52.304 09:53:59 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:52.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:52.304 --rc genhtml_branch_coverage=1 00:42:52.304 --rc genhtml_function_coverage=1 00:42:52.304 --rc genhtml_legend=1 00:42:52.304 --rc geninfo_all_blocks=1 00:42:52.304 --rc geninfo_unexecuted_blocks=1 00:42:52.304 00:42:52.304 ' 00:42:52.304 09:53:59 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:52.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:52.304 --rc genhtml_branch_coverage=1 00:42:52.304 --rc genhtml_function_coverage=1 00:42:52.304 --rc genhtml_legend=1 00:42:52.304 --rc geninfo_all_blocks=1 00:42:52.304 --rc geninfo_unexecuted_blocks=1 00:42:52.304 00:42:52.304 ' 00:42:52.304 09:53:59 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:52.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:52.304 --rc genhtml_branch_coverage=1 00:42:52.304 --rc genhtml_function_coverage=1 00:42:52.304 --rc genhtml_legend=1 00:42:52.304 --rc geninfo_all_blocks=1 00:42:52.304 --rc geninfo_unexecuted_blocks=1 00:42:52.304 00:42:52.304 ' 00:42:52.304 09:53:59 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:42:52.304 09:53:59 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:42:52.304 09:53:59 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:42:52.304 09:53:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:52.304 09:53:59 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:52.304 09:53:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:52.304 ************************************ 00:42:52.304 START TEST nvmf_target_core 00:42:52.304 ************************************ 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:42:52.304 * Looking for test storage... 00:42:52.304 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:52.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:52.304 --rc genhtml_branch_coverage=1 00:42:52.304 --rc genhtml_function_coverage=1 00:42:52.304 --rc genhtml_legend=1 00:42:52.304 --rc geninfo_all_blocks=1 00:42:52.304 --rc geninfo_unexecuted_blocks=1 00:42:52.304 00:42:52.304 ' 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:52.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:52.304 --rc genhtml_branch_coverage=1 00:42:52.304 --rc genhtml_function_coverage=1 00:42:52.304 --rc genhtml_legend=1 00:42:52.304 --rc geninfo_all_blocks=1 00:42:52.304 --rc geninfo_unexecuted_blocks=1 00:42:52.304 00:42:52.304 ' 00:42:52.304 09:53:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:52.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:52.305 --rc genhtml_branch_coverage=1 00:42:52.305 --rc genhtml_function_coverage=1 00:42:52.305 --rc genhtml_legend=1 00:42:52.305 --rc geninfo_all_blocks=1 00:42:52.305 --rc geninfo_unexecuted_blocks=1 00:42:52.305 00:42:52.305 ' 00:42:52.305 09:53:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:52.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:52.305 --rc genhtml_branch_coverage=1 00:42:52.305 --rc genhtml_function_coverage=1 00:42:52.305 --rc genhtml_legend=1 00:42:52.305 --rc geninfo_all_blocks=1 00:42:52.305 --rc geninfo_unexecuted_blocks=1 00:42:52.305 00:42:52.305 ' 00:42:52.305 09:53:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:42:52.305 09:53:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:42:52.305 09:53:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:42:52.305 09:53:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:42:52.305 09:53:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:52.305 09:53:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:52.305 09:53:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:52.305 09:53:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:52.305 09:53:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:52.305 09:53:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:52.305 09:53:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:52.305 09:53:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:52.305 09:53:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:52.305 09:53:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:52.565 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:42:52.565 ************************************ 00:42:52.565 START TEST nvmf_abort 00:42:52.565 ************************************ 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:42:52.565 * Looking for test storage... 
00:42:52.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:52.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:52.565 --rc genhtml_branch_coverage=1 00:42:52.565 --rc genhtml_function_coverage=1 00:42:52.565 --rc genhtml_legend=1 00:42:52.565 --rc geninfo_all_blocks=1 00:42:52.565 --rc geninfo_unexecuted_blocks=1 00:42:52.565 00:42:52.565 ' 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:52.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:52.565 --rc genhtml_branch_coverage=1 00:42:52.565 --rc genhtml_function_coverage=1 00:42:52.565 --rc genhtml_legend=1 00:42:52.565 --rc geninfo_all_blocks=1 00:42:52.565 --rc geninfo_unexecuted_blocks=1 00:42:52.565 00:42:52.565 ' 00:42:52.565 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:52.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:52.566 --rc genhtml_branch_coverage=1 00:42:52.566 --rc genhtml_function_coverage=1 00:42:52.566 --rc genhtml_legend=1 00:42:52.566 --rc geninfo_all_blocks=1 00:42:52.566 --rc geninfo_unexecuted_blocks=1 00:42:52.566 00:42:52.566 ' 00:42:52.566 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:52.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:52.566 --rc genhtml_branch_coverage=1 00:42:52.566 --rc genhtml_function_coverage=1 00:42:52.566 --rc genhtml_legend=1 00:42:52.566 --rc geninfo_all_blocks=1 00:42:52.566 --rc geninfo_unexecuted_blocks=1 00:42:52.566 00:42:52.566 ' 00:42:52.566 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:42:52.566 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:42:52.566 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
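The cmp_versions trace above (invoked as "lt 1.15 2") splits each version string on ".-:" into an array and compares the fields numerically, padding the shorter version with zeros, which is why 1.15 sorts below 2 even though 15 is larger than 2 as a bare field. A standalone sketch of that field-by-field comparison; the function name is mine, not from scripts/common.sh:

    # Return 0 when version $1 sorts strictly before version $2.
    version_lt() {
        local IFS=.-: i
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov predates 2.x"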
00:42:52.566 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:52.566 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:52.566 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:52.566 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:52.826 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:42:52.826 
09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # ip link set 
nvmf_init_br nomaster 00:42:52.826 Cannot find device "nvmf_init_br" 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # true 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:42:52.826 Cannot find device "nvmf_init_br2" 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # true 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:42:52.826 Cannot find device "nvmf_tgt_br" 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # true 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:42:52.826 Cannot find device "nvmf_tgt_br2" 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # true 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:42:52.826 Cannot find device "nvmf_init_br" 00:42:52.826 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # true 00:42:52.827 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:42:52.827 Cannot find device "nvmf_init_br2" 00:42:52.827 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # true 00:42:52.827 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:42:52.827 Cannot find device "nvmf_tgt_br" 00:42:52.827 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # true 00:42:52.827 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:42:52.827 Cannot find device "nvmf_tgt_br2" 00:42:52.827 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # true 00:42:52.827 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:42:52.827 Cannot find device "nvmf_br" 00:42:52.827 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # true 00:42:52.827 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:42:52.827 Cannot find device "nvmf_init_if" 00:42:52.827 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # true 00:42:52.827 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:42:52.827 Cannot find device "nvmf_init_if2" 00:42:52.827 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # true 00:42:52.827 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:42:52.827 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:52.827 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # true 00:42:52.827 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:42:52.827 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:52.827 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # true 00:42:52.827 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:42:52.827 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:42:52.827 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:42:52.827 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:42:52.827 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:42:52.827 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:42:53.162 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:42:53.162 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:42:53.162 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:42:53.162 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:42:53.162 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:42:53.162 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:42:53.162 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:42:53.162 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:42:53.162 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:42:53.162 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:42:53.162 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:42:53.162 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:42:53.162 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:42:53.162 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:42:53.162 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:42:53.162 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:42:53.162 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:42:53.162 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:42:53.162 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:42:53.162 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:42:53.162 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:42:53.162 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT' 00:42:53.162 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:42:53.162 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:42:53.162 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:42:53.162 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:42:53.162 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:42:53.162 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:42:53.162 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:42:53.162 00:42:53.162 --- 10.0.0.3 ping statistics --- 00:42:53.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:53.162 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:42:53.162 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:42:53.162 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:42:53.162 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.096 ms 00:42:53.162 00:42:53.162 --- 10.0.0.4 ping statistics --- 00:42:53.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:53.162 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:42:53.162 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:42:53.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:53.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:42:53.162 00:42:53.162 --- 10.0.0.1 ping statistics --- 00:42:53.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:53.162 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:42:53.162 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:42:53.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:42:53.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:42:53.162 00:42:53.162 --- 10.0.0.2 ping statistics --- 00:42:53.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:53.162 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:42:53.162 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:53.162 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@461 -- # return 0 00:42:53.162 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:53.162 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:53.162 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:53.162 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:53.162 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:53.162 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:53.162 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:53.162 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:42:53.162 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:53.162 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:53.162 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:42:53.483 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=62499 00:42:53.483 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 62499 00:42:53.483 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:42:53.483 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 62499 ']' 00:42:53.483 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:53.483 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:53.483 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:53.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:53.483 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:53.483 09:54:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:42:53.483 [2024-12-09 09:54:00.241000] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
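The nvmf_veth_init trace above gives the target its own network namespace, wires it to the initiator side over veth pairs enslaved to a bridge, opens TCP port 4420 in iptables, and proves connectivity with the four pings before nvmf_tgt is launched inside the namespace. A condensed replay of that topology, assuming root and iproute2, with one initiator pair and one target pair where the script creates two of each:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3        # host to target namespace, as in the log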
00:42:53.483 [2024-12-09 09:54:00.241096] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:53.483 [2024-12-09 09:54:00.397739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:53.483 [2024-12-09 09:54:00.455354] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:53.483 [2024-12-09 09:54:00.455415] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:53.483 [2024-12-09 09:54:00.455423] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:53.483 [2024-12-09 09:54:00.455428] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:53.483 [2024-12-09 09:54:00.455433] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:53.483 [2024-12-09 09:54:00.456452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:53.483 [2024-12-09 09:54:00.456339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:53.483 [2024-12-09 09:54:00.456456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:54.421 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:54.421 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:42:54.421 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:54.421 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:54.421 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:42:54.421 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:54.421 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:42:54.421 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:54.421 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:42:54.421 [2024-12-09 09:54:01.200817] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:54.421 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:54.421 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:42:54.421 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:54.421 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:42:54.421 Malloc0 00:42:54.422 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:54.422 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:42:54.422 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:54.422 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:42:54.422 
Delay0 00:42:54.422 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:54.422 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:42:54.422 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:54.422 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:42:54.422 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:54.422 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:42:54.422 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:54.422 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:42:54.422 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:54.422 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:42:54.422 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:54.422 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:42:54.422 [2024-12-09 09:54:01.284539] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:42:54.422 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:54.422 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:42:54.422 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:54.422 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:42:54.422 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:54.422 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:42:54.681 [2024-12-09 09:54:01.481959] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:42:56.590 Initializing NVMe Controllers 00:42:56.590 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:42:56.590 controller IO queue size 128 less than required 00:42:56.590 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:42:56.590 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:42:56.590 Initialization complete. Launching workers. 
00:42:56.590 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 34930 00:42:56.590 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34991, failed to submit 62 00:42:56.590 success 34934, unsuccessful 57, failed 0 00:42:56.590 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:56.590 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:56.590 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:42:56.590 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:56.590 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:42:56.590 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:42:56.590 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:56.590 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:42:56.590 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:56.590 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:42:56.590 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:56.590 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:56.590 rmmod nvme_tcp 00:42:56.590 rmmod nvme_fabrics 00:42:56.590 rmmod nvme_keyring 00:42:56.590 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:56.590 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:42:56.590 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:42:56.590 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 62499 ']' 00:42:56.590 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 62499 00:42:56.590 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 62499 ']' 00:42:56.590 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 62499 00:42:56.590 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:42:56.590 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:56.850 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62499 00:42:56.850 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:56.850 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:56.850 killing process with pid 62499 00:42:56.850 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62499' 00:42:56.850 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 62499 00:42:56.850 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 62499 00:42:56.850 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:56.850 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:56.850 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:56.850 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:42:56.850 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:56.850 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:42:56.850 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:42:56.850 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:56.850 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:42:56.850 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:42:56.850 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:42:57.109 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:42:57.109 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:42:57.109 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:42:57.109 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:42:57.109 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:42:57.109 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:42:57.109 09:54:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:42:57.109 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:42:57.109 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:42:57.109 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:42:57.109 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:42:57.109 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:42:57.109 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:57.109 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:57.109 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:57.369 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:42:57.369 00:42:57.369 real 0m4.771s 00:42:57.369 user 0m12.381s 00:42:57.369 sys 0m1.177s 00:42:57.369 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:57.369 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:42:57.369 ************************************ 00:42:57.369 END TEST nvmf_abort 00:42:57.369 ************************************ 00:42:57.369 09:54:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:42:57.369 09:54:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:57.369 09:54:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:57.369 09:54:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:42:57.369 ************************************ 00:42:57.369 START TEST nvmf_ns_hotplug_stress 00:42:57.369 ************************************ 00:42:57.369 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:42:57.369 * Looking for test storage... 00:42:57.369 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:42:57.369 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:57.369 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:42:57.369 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:57.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:57.630 --rc genhtml_branch_coverage=1 00:42:57.630 --rc genhtml_function_coverage=1 00:42:57.630 --rc genhtml_legend=1 00:42:57.630 --rc geninfo_all_blocks=1 00:42:57.630 --rc geninfo_unexecuted_blocks=1 00:42:57.630 00:42:57.630 ' 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:57.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:57.630 --rc genhtml_branch_coverage=1 00:42:57.630 --rc genhtml_function_coverage=1 00:42:57.630 --rc genhtml_legend=1 00:42:57.630 --rc geninfo_all_blocks=1 00:42:57.630 --rc geninfo_unexecuted_blocks=1 00:42:57.630 00:42:57.630 ' 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:57.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:57.630 --rc genhtml_branch_coverage=1 00:42:57.630 --rc genhtml_function_coverage=1 00:42:57.630 --rc genhtml_legend=1 00:42:57.630 --rc geninfo_all_blocks=1 00:42:57.630 --rc geninfo_unexecuted_blocks=1 00:42:57.630 00:42:57.630 ' 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:57.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:57.630 --rc genhtml_branch_coverage=1 00:42:57.630 --rc genhtml_function_coverage=1 00:42:57.630 --rc genhtml_legend=1 00:42:57.630 --rc geninfo_all_blocks=1 00:42:57.630 --rc geninfo_unexecuted_blocks=1 00:42:57.630 00:42:57.630 ' 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:57.630 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:57.631 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:42:57.631 09:54:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:42:57.631 Cannot find device "nvmf_init_br" 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:42:57.631 Cannot find device "nvmf_init_br2" 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:42:57.631 Cannot find device "nvmf_tgt_br" 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:42:57.631 Cannot find device "nvmf_tgt_br2" 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:42:57.631 Cannot find device "nvmf_init_br" 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:42:57.631 Cannot find device "nvmf_init_br2" 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:42:57.631 Cannot find device "nvmf_tgt_br" 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:42:57.631 Cannot find device "nvmf_tgt_br2" 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:42:57.631 Cannot find device "nvmf_br" 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
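# [editor's note] Every "Cannot find device ..." above is expected, not a failure: before
# building the test topology, nvmf/common.sh tears down whatever a previous run may have
# left behind, and the "true" traced after each ip(8) error keeps set -e from aborting.
# A hedged sketch of the idiom (device names from the log; the loop itself is illustrative):
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2 nvmf_init_if nvmf_init_if2; do
  ip link set "$dev" nomaster || true   # detach from any stale bridge
  ip link set "$dev" down     || true
  ip link delete "$dev"       || true   # absent devices are simply skipped
done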
-- nvmf/common.sh@170 -- # true 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:42:57.631 Cannot find device "nvmf_init_if" 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:42:57.631 Cannot find device "nvmf_init_if2" 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:42:57.631 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:42:57.631 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:42:57.631 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:42:57.891 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:42:57.891 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:42:57.891 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:42:57.891 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:42:57.891 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:42:57.891 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:42:57.891 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:42:57.891 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:42:57.891 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:42:57.891 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:42:57.891 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:42:57.891 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip 
link set nvmf_tgt_br up 00:42:57.891 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:42:57.891 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:42:57.891 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:42:57.891 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:42:57.891 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:42:57.891 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:42:57.891 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:42:57.891 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:42:57.891 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:42:57.891 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:42:57.891 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:42:57.891 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:42:57.891 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:42:57.891 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:42:57.891 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:42:57.891 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:42:57.891 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:42:57.891 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:42:57.891 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:42:57.891 00:42:57.891 --- 10.0.0.3 ping statistics --- 00:42:57.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:57.891 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:42:57.891 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:42:57.891 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:42:57.892 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.032 ms 00:42:57.892 00:42:57.892 --- 10.0.0.4 ping statistics --- 00:42:57.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:57.892 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:42:57.892 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:42:57.892 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:57.892 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:42:57.892 00:42:57.892 --- 10.0.0.1 ping statistics --- 00:42:57.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:57.892 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:42:57.892 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:42:57.892 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:57.892 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:42:57.892 00:42:57.892 --- 10.0.0.2 ping statistics --- 00:42:57.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:57.892 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:42:57.892 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:57.892 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0 00:42:57.892 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:57.892 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:57.892 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:57.892 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:57.892 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:57.892 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:57.892 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:57.892 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:42:57.892 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:57.892 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:57.892 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:42:57.892 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:42:57.892 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=62814 00:42:57.892 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 62814 00:42:57.892 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 62814 ']' 00:42:57.892 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:57.892 09:54:04 
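# [editor's note] Condensed replay of the nvmf_veth_init sequence traced above
# (nvmf/common.sh@177-@219): one network namespace for the target, four veth pairs,
# a bridge joining the host-side peers, and iptables ACCEPT rules for port 4420.
# Names and addresses are copied from the log; error handling and some link-up calls
# are trimmed, so this is a readable summary rather than the script's exact code:
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator side
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target side
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # move target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge && ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$dev" up && ip link set "$dev" master nvmf_br  # bridge the host-side peers
done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
# the four pings above (10.0.0.3/.4 from the host, 10.0.0.1/.2 from inside the netns)
# then verify both directions across the bridge before the target is started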
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:57.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:57.892 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:57.892 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:57.892 09:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:42:57.892 [2024-12-09 09:54:04.932793] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:42:57.892 [2024-12-09 09:54:04.932873] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:58.155 [2024-12-09 09:54:05.083112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:58.155 [2024-12-09 09:54:05.136731] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:58.155 [2024-12-09 09:54:05.136803] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:58.155 [2024-12-09 09:54:05.136810] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:58.155 [2024-12-09 09:54:05.136815] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:58.155 [2024-12-09 09:54:05.136819] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
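# [editor's note] nvmfappstart, as traced above: the target runs inside the test namespace
# and the harness polls its RPC socket before issuing any configuration. The launch line is
# verbatim from the log; wait_for_rpc is an illustrative stand-in for the script's
# waitforlisten helper, not its actual code:
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
  -i 0 -e 0xFFFF -m 0xE &     # shm id 0, all tracepoint groups, core mask 0xE (cores 1-3, as the reactor lines show)
nvmfpid=$!                    # 62814 in this run
wait_for_rpc() {              # poll /var/tmp/spdk.sock; max_retries=100 appears in the trace
  local retries=100
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods &>/dev/null; do
    (( retries-- > 0 )) || return 1
    sleep 0.1
  done
}
wait_for_rpc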
00:42:58.155 [2024-12-09 09:54:05.137595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:58.155 [2024-12-09 09:54:05.137949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:58.155 [2024-12-09 09:54:05.137953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:59.095 09:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:59.095 09:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:42:59.095 09:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:59.095 09:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:59.095 09:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:42:59.095 09:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:59.095 09:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:42:59.095 09:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:42:59.095 [2024-12-09 09:54:06.085012] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:59.095 09:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:42:59.355 09:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:42:59.614 [2024-12-09 09:54:06.529219] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:42:59.614 09:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:42:59.875 09:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:43:00.134 Malloc0 00:43:00.134 09:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:43:00.407 Delay0 00:43:00.407 09:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:43:00.715 09:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:43:00.715 NULL1 00:43:00.715 09:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:43:00.975 09:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # 
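# [editor's note] The subsystem is provisioned through rpc.py exactly as traced above;
# replayed here in one place for readability ($rpc is shorthand, everything else verbatim):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                   # TCP transport init, as logged
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
$rpc bdev_malloc_create 32 512 -b Malloc0                      # 32 MiB backing bdev, 512 B blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000               # artificial latency (values in microseconds)
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # ns 1: the slow namespace under hotplug
$rpc bdev_null_create NULL1 1000 512                           # ns 2 backing: 1000 MiB null bdev, later resized
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1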
PERF_PID=62940 00:43:00.975 09:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:43:00.975 09:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62940 00:43:00.975 09:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:43:02.355 Read completed with error (sct=0, sc=11) 00:43:02.355 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:43:02.355 09:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:43:02.355 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:43:02.355 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:43:02.355 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:43:02.355 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:43:02.355 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:43:02.615 09:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:43:02.615 09:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:43:02.615 true 00:43:02.615 09:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62940 00:43:02.615 09:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:43:03.554 09:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:43:03.814 09:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:43:03.814 09:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:43:03.814 true 00:43:04.074 09:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62940 00:43:04.074 09:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:43:04.074 09:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:43:04.334 09:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:43:04.334 09:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:43:04.594 true 00:43:04.594 09:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
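# [editor's note] The stress phase traced above, condensed: spdk_nvme_perf (PID 62940 in
# this run) hammers the subsystem for 30 s while the loop hot-removes/re-adds namespace 1
# and grows NULL1 one unit per pass (null_size 1001, 1002, ... in the trace). The loop
# framing is reconstructed for readability; commands and arguments are from the log:
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
  -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!
null_size=1000
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
while kill -0 "$PERF_PID"; do            # runs until the perf job exits
  $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc bdev_null_resize NULL1 $(( ++null_size ))
done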
kill -0 62940 00:43:04.594 09:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:43:05.537 09:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:43:05.798 09:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:43:05.798 09:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:43:06.058 true 00:43:06.058 09:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62940 00:43:06.058 09:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:43:06.318 09:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:43:06.578 09:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:43:06.578 09:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:43:06.578 true 00:43:06.578 09:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62940 00:43:06.578 09:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:43:07.517 09:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:43:07.777 09:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:43:07.777 09:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:43:08.037 true 00:43:08.037 09:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62940 00:43:08.037 09:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:43:08.297 09:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:43:08.557 09:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:43:08.557 09:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:43:08.816 true 00:43:08.816 09:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62940 00:43:08.816 09:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:43:09.757 09:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:43:09.757 09:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:43:09.757 09:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:43:10.016 true 00:43:10.016 09:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62940 00:43:10.016 09:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:43:10.275 09:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:43:10.534 09:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:43:10.534 09:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:43:10.793 true 00:43:10.793 09:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62940 00:43:10.793 09:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:43:11.731 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:43:11.732 09:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:43:11.732 09:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:43:11.732 09:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:43:11.991 true 00:43:11.991 09:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62940 00:43:11.991 09:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:43:12.250 09:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:43:12.508 09:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:43:12.508 09:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:43:12.766 true 00:43:12.766 09:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62940 00:43:12.766 09:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:43:13.704 09:54:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:43:13.704 09:54:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:43:13.704 09:54:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:43:13.963 true 00:43:13.963 09:54:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62940 00:43:13.963 09:54:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:43:14.224 09:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:43:14.486 09:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:43:14.486 09:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:43:14.746 true 00:43:14.746 09:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62940 00:43:14.746 09:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:43:15.005 09:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:43:15.263 09:54:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:43:15.263 09:54:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:43:15.522 true 00:43:15.523 09:54:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62940 00:43:15.523 09:54:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:43:16.460 09:54:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:43:16.719 09:54:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:43:16.719 09:54:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:43:16.978 true 00:43:16.978 09:54:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62940 00:43:16.978 09:54:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:43:17.237 09:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:43:17.496 09:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:43:17.496 09:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:43:17.754 true 00:43:17.754 09:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62940 00:43:17.754 09:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:43:18.691 09:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:43:18.950 09:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:43:18.950 09:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:43:18.950 true 00:43:18.950 09:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62940 00:43:18.950 09:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:43:19.210 09:54:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:43:19.779 09:54:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:43:19.779 09:54:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:43:19.779 true 00:43:19.779 09:54:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62940 00:43:19.779 09:54:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:43:20.037 09:54:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:43:20.296 09:54:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:43:20.296 09:54:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:43:20.555 true 00:43:20.555 09:54:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62940 00:43:20.555 09:54:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:43:21.568 09:54:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:43:21.827 09:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:43:21.827 09:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:43:22.087 true 00:43:22.087 09:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62940 00:43:22.087 09:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:43:22.347 09:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:43:22.607 09:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:43:22.607 09:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:43:22.866 true 00:43:22.866 09:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62940 00:43:22.866 09:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:43:23.803 09:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:43:23.803 09:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:43:23.803 09:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:43:24.063 true 00:43:24.063 09:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62940 00:43:24.063 09:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:43:24.323 09:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:43:24.583 09:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:43:24.583 09:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:43:24.842 true 00:43:24.842 09:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62940 00:43:24.842 09:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:43:25.779 09:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:43:25.779 09:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:43:25.779 09:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:43:26.037 true 00:43:26.037 09:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62940 00:43:26.038 09:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:43:26.297 09:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:43:26.556 09:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:43:26.556 09:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:43:26.816 true 00:43:26.816 09:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62940 00:43:26.816 09:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:43:27.754 09:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:43:27.754 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:43:28.013 09:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:43:28.013 09:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:43:28.013 true 00:43:28.013 09:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62940 00:43:28.013 09:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:43:28.304 09:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:43:28.562 09:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:43:28.562 09:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:43:28.821 true 00:43:28.821 09:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62940 00:43:28.821 09:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:43:29.758 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:43:30.016 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:43:30.016 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:43:30.016 true 00:43:30.276 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62940 00:43:30.276 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:43:30.276 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:43:30.535 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:43:30.535 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:43:30.793 true 00:43:30.793 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62940 00:43:30.793 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:43:31.729 Initializing NVMe Controllers 00:43:31.729 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:43:31.729 Controller IO queue size 128, less than required. 00:43:31.729 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:43:31.729 Controller IO queue size 128, less than required. 00:43:31.729 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:43:31.729 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:43:31.729 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:43:31.729 Initialization complete. Launching workers. 
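# [editor's note] Arithmetic check on the latency summary that follows: the Total row is
# the IOPS-weighted combination of the two namespaces.
#   IOPS:    335.23 + 11309.37 = 11644.60                              (matches Total)
#   Average: (335.23*205938.58 + 11309.37*11317.84) / 11644.60 ~= 16920.7 us
#            (reported: 16920.72; the difference is rounding of the displayed inputs)
#   min/max: 3022.94 / 1026551.08, i.e. the extremes across both rows  (matches)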
00:43:31.729 ========================================================
00:43:31.729                                                                               Latency(us)
00:43:31.729 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:43:31.729 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     335.23       0.16  205938.58    3291.09 1026551.08
00:43:31.729 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   11309.37       5.52   11317.84    3022.94  632742.13
00:43:31.729 ========================================================
00:43:31.729 Total                                                                    :   11644.60       5.69   16920.72    3022.94 1026551.08
00:43:31.729 
00:43:31.729 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:43:31.989 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:43:31.989 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:43:32.248 true
00:43:32.248 09:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62940
00:43:32.248 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (62940) - No such process
00:43:32.248 09:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 62940
00:43:32.248 09:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:43:32.508 09:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:43:32.768 09:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:43:32.768 09:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:43:32.768 09:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:43:32.768 09:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:43:32.768 09:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:43:32.768 null0
00:43:32.768 09:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:43:32.768 09:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:43:32.768 09:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:43:33.027 null1
00:43:33.027 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:43:33.027 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:43:33.027 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:43:33.287 null2
00:43:33.287 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:43:33.287 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:43:33.287 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:43:33.546 null3 00:43:33.546 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:43:33.546 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:43:33.546 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:43:33.806 null4 00:43:33.806 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:43:33.806 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:43:33.806 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:43:34.069 null5 00:43:34.069 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:43:34.069 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:43:34.069 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:43:34.333 null6 00:43:34.333 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:43:34.333 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:43:34.333 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:43:34.593 null7 00:43:34.593 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:43:34.593 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:43:34.593 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:43:34.593 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:43:34.593 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:43:34.593 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:43:34.593 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:43:34.593 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:43:34.593 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:43:34.593 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:43:34.593 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:43:34.593 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:43:34.593 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:43:34.593-00:43:34.594 [equivalent launch trace for add_remove 2 null1 through add_remove 8 null7, interleaved with the workers' first nvmf_subsystem_add_ns calls]
00:43:34.594 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 64009 64011 64012 64013 64017 64018 64020 64022
00:43:34.853-00:43:40.295 [ten add/remove iterations per worker: each cycle traces (( ++i )), (( i < 10 )), rpc.py nvmf_subsystem_add_ns -n <nsid> nqn.2016-06.io.spdk:cnode1 <bdev>, then rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 <nsid>; the xtrace output of the eight parallel workers is interleaved]
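The pattern exercised above, reconstructed from the xtrace output for readability (a sketch, not the verbatim test/nvmf/target/ns_hotplug_stress.sh; the rpc_py and nqn variables are assumptions inferred from the paths and NQN visible in the trace):

add_remove() {
    local nsid=$1 bdev=$2
    # Ten attach/detach cycles per worker against the same subsystem (@16-@18).
    for ((i = 0; i < 10; i++)); do
        "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"
    done
}

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed helper variable
nqn=nqn.2016-06.io.spdk:cnode1
nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
    add_remove "$((i + 1))" "null$i" &   # backgrounded, so worker output interleaves (@62-@64)
    pids+=($!)
done
wait "${pids[@]}"                        # the @66 wait over the eight worker PIDs

Because the workers run concurrently, the ordering of add_ns/remove_ns calls in the log is nondeterministic; only each worker's own sequence is fixed.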
00:43:40.295 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:43:40.295 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:43:40.295 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:43:40.295 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:43:40.553 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:43:40.553 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:43:40.553 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:43:40.553 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:43:40.553 rmmod nvme_tcp
00:43:40.553 rmmod nvme_fabrics
00:43:40.553 rmmod nvme_keyring
00:43:40.553 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:43:40.553 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:43:40.553 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:43:40.553 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 62814 ']'
00:43:40.553 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 62814
00:43:40.553 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 62814 ']'
00:43:40.553 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 62814
00:43:40.553 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:43:40.553 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:43:40.553 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62814
00:43:40.553 killing process with pid 62814
00:43:40.553 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:43:40.553 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:43:40.553 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62814'
00:43:40.553 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 62814
00:43:40.553 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 62814
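The teardown just traced (nvmfcleanup followed by killprocess 62814) boils down to the following; a simplified sketch based only on the calls visible here, since the real nvmf/common.sh and autotest_common.sh helpers carry extra checks (e.g. the uname and reactor_1/sudo process-name tests above):

nvmfcleanup() {
    sync
    set +e
    # The modules can still be busy while connections drain, hence the retry loop (@125).
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e
}

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1   # @954: refuse an empty pid
    kill -0 "$pid" || return 0  # @958: nothing to do if the process is already gone
    echo "killing process with pid $pid"
    kill "$pid"                 # @973: SIGTERM lets the SPDK target shut down cleanly
    wait "$pid"                 # @978: reap it before the next test starts
}

wait works here only because the target was started as a child of the same shell.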
nvmf_init_br nomaster 00:43:40.812 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:43:40.812 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:43:40.812 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:43:40.812 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:43:40.812 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:43:40.812 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:43:40.812 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:43:40.812 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:43:40.812 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:43:40.812 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:43:40.812 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:43:41.071 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:43:41.071 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:43:41.071 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:41.071 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:41.071 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:41.071 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0 00:43:41.071 00:43:41.071 real 0m43.711s 00:43:41.071 user 3m29.756s 00:43:41.071 sys 0m11.646s 00:43:41.071 ************************************ 00:43:41.071 END TEST nvmf_ns_hotplug_stress 00:43:41.071 ************************************ 00:43:41.071 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:41.071 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:43:41.071 09:54:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:43:41.071 09:54:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:43:41.071 09:54:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:41.071 09:54:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:43:41.071 ************************************ 00:43:41.071 START TEST nvmf_delete_subsystem 00:43:41.071 ************************************ 00:43:41.071 09:54:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh 
--transport=tcp 00:43:41.071 * Looking for test storage... 00:43:41.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:43:41.071 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:41.071 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:43:41.071 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:41.333 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:41.333 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:41.333 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:41.333 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:41.333 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:43:41.333 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:43:41.333 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:43:41.333 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:43:41.333 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:43:41.333 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:43:41.333 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:43:41.333 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:41.333 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:43:41.333 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:43:41.333 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:41.333 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:41.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:41.334 --rc genhtml_branch_coverage=1 00:43:41.334 --rc genhtml_function_coverage=1 00:43:41.334 --rc genhtml_legend=1 00:43:41.334 --rc geninfo_all_blocks=1 00:43:41.334 --rc geninfo_unexecuted_blocks=1 00:43:41.334 00:43:41.334 ' 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:41.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:41.334 --rc genhtml_branch_coverage=1 00:43:41.334 --rc genhtml_function_coverage=1 00:43:41.334 --rc genhtml_legend=1 00:43:41.334 --rc geninfo_all_blocks=1 00:43:41.334 --rc geninfo_unexecuted_blocks=1 00:43:41.334 00:43:41.334 ' 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:41.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:41.334 --rc genhtml_branch_coverage=1 00:43:41.334 --rc genhtml_function_coverage=1 00:43:41.334 --rc genhtml_legend=1 00:43:41.334 --rc geninfo_all_blocks=1 00:43:41.334 --rc geninfo_unexecuted_blocks=1 00:43:41.334 00:43:41.334 ' 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:41.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:41.334 --rc genhtml_branch_coverage=1 00:43:41.334 --rc genhtml_function_coverage=1 00:43:41.334 --rc genhtml_legend=1 00:43:41.334 --rc geninfo_all_blocks=1 00:43:41.334 --rc geninfo_unexecuted_blocks=1 00:43:41.334 00:43:41.334 ' 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:41.334 
09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:41.334 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:43:41.334 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:43:41.335 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:43:41.335 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:41.335 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:43:41.335 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:43:41.335 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:43:41.335 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:43:41.335 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:43:41.335 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:43:41.335 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:41.335 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:43:41.335 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
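With NET_TYPE=virt, nvmftestinit takes the veth path: two initiator/target interface pairs (10.0.0.1 and 10.0.0.2 on the host side, 10.0.0.3 and 10.0.0.4 inside the nvmf_tgt_ns_spdk namespace) joined by the nvmf_br bridge. Condensed from the nvmf_veth_init trace that follows, a minimal sketch of the topology for the first pair (the second pair is built the same way, and any leftovers from a previous run are torn down first):

    ip netns add nvmf_tgt_ns_spdk                               # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end, gets 10.0.0.1/24
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end, gets 10.0.0.3/24
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                             # bridge ties the two halves together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br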
00:43:41.335 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:43:41.335 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:43:41.335 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:43:41.335 Cannot find device "nvmf_init_br" 00:43:41.335 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:43:41.335 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:43:41.335 Cannot find device "nvmf_init_br2" 00:43:41.335 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:43:41.335 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:43:41.335 Cannot find device "nvmf_tgt_br" 00:43:41.335 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:43:41.335 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:43:41.335 Cannot find device "nvmf_tgt_br2" 00:43:41.335 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:43:41.335 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:43:41.335 Cannot find device "nvmf_init_br" 00:43:41.335 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:43:41.335 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:43:41.335 Cannot find device "nvmf_init_br2" 00:43:41.335 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:43:41.335 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:43:41.335 Cannot find device "nvmf_tgt_br" 00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:43:41.594 Cannot find device "nvmf_tgt_br2" 00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:43:41.594 Cannot find device "nvmf_br" 00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:43:41.594 Cannot find device "nvmf_init_if" 00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:43:41.594 Cannot find device "nvmf_init_if2" 00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:43:41.594 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
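The "Cannot find device" and "Cannot open network namespace" failures above are the expected first-run case: nvmf_veth_init clears any topology left by a previous test before rebuilding it, and each teardown command is followed by a "-- # true" trace entry, consistent with an ignore-errors guard. A sketch of that pattern (assumption: illustrative only, not the verbatim nvmf/common.sh source):

    # teardown tolerates absent devices so a clean host does not abort the run
    ip link delete nvmf_init_if || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true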
00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:43:41.594 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br 
up 00:43:41.594 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:43:41.854 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:43:41.854 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:43:41.854 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:43:41.854 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:43:41.854 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:43:41.854 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:43:41.854 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:43:41.854 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:43:41.854 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:43:41.854 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:43:41.854 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:43:41.854 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.109 ms 00:43:41.854 00:43:41.854 --- 10.0.0.3 ping statistics --- 00:43:41.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:41.854 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:43:41.854 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:43:41.854 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:43:41.854 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.081 ms 00:43:41.854 00:43:41.854 --- 10.0.0.4 ping statistics --- 00:43:41.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:41.854 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:43:41.854 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:43:41.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:41.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 00:43:41.854 00:43:41.854 --- 10.0.0.1 ping statistics --- 00:43:41.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:41.854 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:43:41.854 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:43:41.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:43:41.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:43:41.854 00:43:41.854 --- 10.0.0.2 ping statistics --- 00:43:41.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:41.854 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:43:41.854 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:41.854 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0 00:43:41.854 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:41.854 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:41.854 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:41.854 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:41.854 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:41.854 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:41.854 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:41.854 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:43:41.854 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:41.854 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:41.854 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:43:41.854 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=65427 00:43:41.854 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 65427 00:43:41.854 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 65427 ']' 00:43:41.854 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:41.854 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:41.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:41.854 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:41.854 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:41.854 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:43:41.854 09:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:43:41.854 [2024-12-09 09:54:48.831704] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:43:41.854 [2024-12-09 09:54:48.831785] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:42.112 [2024-12-09 09:54:48.985031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:42.112 [2024-12-09 09:54:49.045856] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:42.112 [2024-12-09 09:54:49.045928] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:42.112 [2024-12-09 09:54:49.045942] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:42.112 [2024-12-09 09:54:49.045953] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:42.112 [2024-12-09 09:54:49.045963] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:42.112 [2024-12-09 09:54:49.047046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:42.112 [2024-12-09 09:54:49.047052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:43.047 09:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:43.047 09:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:43:43.047 09:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:43.047 09:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:43.047 09:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:43:43.047 09:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:43.047 09:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:43.047 09:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:43.047 09:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:43:43.047 [2024-12-09 09:54:49.812170] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:43.047 09:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:43.047 09:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:43:43.047 09:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:43.047 09:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:43:43.047 09:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:43.047 09:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:43:43.047 09:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 
00:43:43.047 09:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:43:43.047 [2024-12-09 09:54:49.836261] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:43:43.047 09:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:43.047 09:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:43:43.047 09:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:43.047 09:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:43:43.047 NULL1 00:43:43.047 09:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:43.047 09:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:43:43.047 09:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:43.047 09:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:43:43.047 Delay0 00:43:43.047 09:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:43.047 09:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:43:43.047 09:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:43.047 09:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:43:43.047 09:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:43.047 09:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=65478 00:43:43.047 09:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:43:43.047 09:54:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:43:43.047 [2024-12-09 09:54:50.062514] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
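At this point the fixture is complete: a TCP transport, subsystem cnode1 listening on 10.0.0.3:4420, and a namespace backed by a delay bdev layered on a null bdev. The bdev_delay_create arguments are in microseconds, so Delay0 adds roughly one second to every I/O; that is what guarantees perf still has requests in flight when the subsystem is deleted, which is the condition under test. Condensed from the rpc_cmd trace above (values verbatim; rpc.py stands in for the traced rpc_cmd wrapper, which runs inside the target namespace):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc.py bdev_null_create NULL1 1000 512           # 1000 MB null bdev, 512 B blocks
    rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &    # keeps queue depth 128 in flight
    sleep 2
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # delete with I/O outstanding

The flood of "Read/Write completed with error (sct=0, sc=8)" lines that follows is therefore the expected outcome: generic status 0x08 corresponds to commands aborted as their submission queues are torn down, which is exactly what deleting the subsystem under load should produce.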
00:43:44.965 09:54:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:44.965 09:54:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:44.965 09:54:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:43:45.223 Read completed with error (sct=0, sc=8) 00:43:45.223 Read completed with error (sct=0, sc=8) 00:43:45.223 starting I/O failed: -6 00:43:45.223 Read completed with error (sct=0, sc=8) 00:43:45.223 Write completed with error (sct=0, sc=8) 00:43:45.223 Read completed with error (sct=0, sc=8) 00:43:45.223 Read completed with error (sct=0, sc=8) 00:43:45.223 starting I/O failed: -6 00:43:45.223 Write completed with error (sct=0, sc=8) 00:43:45.223 Read completed with error (sct=0, sc=8) 00:43:45.223 Read completed with error (sct=0, sc=8) 00:43:45.223 Write completed with error (sct=0, sc=8) 00:43:45.223 starting I/O failed: -6 00:43:45.223 Read completed with error (sct=0, sc=8) 00:43:45.223 Read completed with error (sct=0, sc=8) 00:43:45.223 Read completed with error (sct=0, sc=8) 00:43:45.223 Read completed with error (sct=0, sc=8) 00:43:45.223 starting I/O failed: -6 00:43:45.223 Write completed with error (sct=0, sc=8) 00:43:45.223 Read completed with error (sct=0, sc=8) 00:43:45.223 Read completed with error (sct=0, sc=8) 00:43:45.223 Read completed with error (sct=0, sc=8) 00:43:45.223 starting I/O failed: -6 00:43:45.223 Read completed with error (sct=0, sc=8) 00:43:45.223 Write completed with error (sct=0, sc=8) 00:43:45.223 Read completed with error (sct=0, sc=8) 00:43:45.223 Write completed with error (sct=0, sc=8) 00:43:45.223 starting I/O failed: -6 00:43:45.223 Write completed with error (sct=0, sc=8) 00:43:45.223 Read completed with error (sct=0, sc=8) 00:43:45.223 Write completed with error (sct=0, sc=8) 00:43:45.223 Read completed with error (sct=0, sc=8) 00:43:45.223 starting I/O failed: -6 00:43:45.223 Write completed with error (sct=0, sc=8) 00:43:45.223 Read completed with error (sct=0, sc=8) 00:43:45.223 Write completed with error (sct=0, sc=8) 00:43:45.223 Read completed with error (sct=0, sc=8) 00:43:45.223 starting I/O failed: -6 00:43:45.223 Read completed with error (sct=0, sc=8) 00:43:45.223 Read completed with error (sct=0, sc=8) 00:43:45.223 Write completed with error (sct=0, sc=8) 00:43:45.223 Read completed with error (sct=0, sc=8) 00:43:45.223 starting I/O failed: -6 00:43:45.223 Read completed with error (sct=0, sc=8) 00:43:45.223 Read completed with error (sct=0, sc=8) 00:43:45.223 Read completed with error (sct=0, sc=8) 00:43:45.223 Write completed with error (sct=0, sc=8) 00:43:45.223 starting I/O failed: -6 00:43:45.223 Read completed with error (sct=0, sc=8) 00:43:45.223 Write completed with error (sct=0, sc=8) 00:43:45.223 Read completed with error (sct=0, sc=8) 00:43:45.223 Read completed with error (sct=0, sc=8) 00:43:45.223 starting I/O failed: -6 00:43:45.223 Read completed with error (sct=0, sc=8) 00:43:45.223 Read completed with error (sct=0, sc=8) 00:43:45.223 Write completed with error (sct=0, sc=8) 00:43:45.223 Read completed with error (sct=0, sc=8) 00:43:45.223 starting I/O failed: -6 00:43:45.223 Write completed with error (sct=0, sc=8) 00:43:45.223 Read completed with error (sct=0, sc=8) 00:43:45.223 [2024-12-09 09:54:52.094403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a37e0 is same with the 
state(6) to be set 00:43:45.223 Write completed with error (sct=0, sc=8) 00:43:45.223 Read completed with error (sct=0, sc=8) 00:43:45.223 Read completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 starting I/O failed: -6 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Read completed with 
error (sct=0, sc=8) 00:43:45.224 starting I/O failed: -6 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 starting I/O failed: -6 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 starting I/O failed: -6 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 starting I/O failed: -6 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 starting I/O failed: -6 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 starting I/O failed: -6 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 starting I/O failed: -6 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 starting I/O failed: -6 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 starting I/O failed: -6 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 starting I/O failed: -6 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 starting I/O failed: -6 00:43:45.224 [2024-12-09 09:54:52.095490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3e5800d4d0 is same with the state(6) to be set 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read 
completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Write completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:45.224 Read completed with error (sct=0, sc=8) 00:43:46.161 [2024-12-09 09:54:53.074878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1297aa0 is same with the state(6) to be set 00:43:46.161 Read completed with error (sct=0, sc=8) 00:43:46.161 Read completed with error (sct=0, sc=8) 00:43:46.161 Write completed with error (sct=0, sc=8) 00:43:46.161 Read completed with error (sct=0, sc=8) 00:43:46.161 Read completed with error (sct=0, sc=8) 00:43:46.161 Read completed with error (sct=0, sc=8) 00:43:46.161 Read completed with error (sct=0, sc=8) 00:43:46.161 Read completed with error (sct=0, sc=8) 00:43:46.161 Read completed with error (sct=0, sc=8) 00:43:46.161 Read completed with error (sct=0, sc=8) 00:43:46.161 Read completed with error (sct=0, sc=8) 00:43:46.161 Read completed with error (sct=0, sc=8) 00:43:46.161 Read completed with error (sct=0, sc=8) 00:43:46.161 Read completed with error (sct=0, sc=8) 00:43:46.161 Read completed with error (sct=0, sc=8) 00:43:46.161 Read completed with error (sct=0, sc=8) 00:43:46.161 Read completed with error (sct=0, sc=8) 00:43:46.161 Write completed with error (sct=0, sc=8) 00:43:46.161 Read completed with error (sct=0, sc=8) 00:43:46.161 Read completed with error (sct=0, sc=8) 00:43:46.161 Read completed with error (sct=0, sc=8) 
00:43:46.161 Read completed with error (sct=0, sc=8)
00:43:46.161 Write completed with error (sct=0, sc=8)
00:43:46.161 [roughly ninety further "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" completions elided; the four distinct tqpair errors they surrounded are kept below]
00:43:46.161 [2024-12-09 09:54:53.091622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a5ea0 is same with the state(6) to be set
00:43:46.161 [2024-12-09 09:54:53.092751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3e5800d800 is same with the state(6) to be set
00:43:46.161 [2024-12-09 09:54:53.093419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3e5800d020 is same with the state(6) to be set
00:43:46.161 [2024-12-09 09:54:53.094186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a2a50 is same with the state(6) to be set
00:43:46.161 Initializing NVMe Controllers
00:43:46.161 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:43:46.161 Controller IO queue size 128, less than required.
00:43:46.162 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:43:46.162 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:43:46.162 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:43:46.162 Initialization complete. Launching workers.
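The (sct=0, sc=8) completions above decode to NVMe generic status 0x08, "command aborted due to SQ deletion": delete_subsystem.sh removes the subsystem while spdk_nvme_perf still has I/O queued, so every outstanding request completes aborted (the same status shows up further down this log as "ABORTED - SQ DELETION (00/08)"). A minimal bash sketch of the pattern being exercised; the perf flags are copied from the second run traced below, while the scripts/rpc.py path and the one-second pause are assumptions rather than values from this log:

    # start perf against the target, then delete the subsystem out from under it
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 1                                   # assumed pacing: let I/O get in flight
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    wait "$perf_pid" || true                  # perf exits reporting "errors occurred"

The latency summary that follows is perf's report for whatever completed before the delete landed.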
00:43:46.162 ========================================================
00:43:46.162                                                                            Latency(us)
00:43:46.162 Device Information                                                      :       IOPS      MiB/s    Average        min        max
00:43:46.162 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     176.27       0.09  880987.82     499.00 1010893.35
00:43:46.162 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     172.30       0.08  909842.29     370.29 2000778.98
00:43:46.162 ========================================================
00:43:46.162 Total                                                                   :     348.57       0.17  895250.64     370.29 2000778.98
00:43:46.162
00:43:46.162 [2024-12-09 09:54:53.094654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1297aa0 (9): Bad file descriptor
00:43:46.162 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred
00:43:46.162 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:46.162 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:43:46.162 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 65478
00:43:46.162 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:43:46.726 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
09:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 65478
/home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (65478) - No such process
09:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 65478
09:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
09:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 65478
09:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
09:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
09:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
09:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
09:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 65478
09:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
09:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
09:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
09:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
09:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
09:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
09:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:43:46.726 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:46.726 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:43:46.726 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:46.726 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:43:46.726 [2024-12-09 09:54:53.627969] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:43:46.726 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:46.726 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:43:46.726 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:46.726 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:43:46.726 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:46.726 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=65529 00:43:46.726 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:43:46.726 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:43:46.726 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65529 00:43:46.726 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:43:46.984 [2024-12-09 09:54:53.829357] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
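The @57/@58/@60 lines that repeat next are delete_subsystem.sh's liveness poll: the script spins while the perf process (pid 65529 here) is still alive, bounded by a delay counter. Condensed into a self-contained bash sketch, with the names and the 20-iteration bound mirroring the trace (kill -0 only tests that the pid exists; it delivers no signal):

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # pid 65529 in the run above
        (( delay++ > 20 )) && break             # bail out after ~10 s of 0.5 s naps
        sleep 0.5
    done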
00:43:47.242 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:43:47.242 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65529 00:43:47.242 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:43:47.807 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:43:47.807 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65529 00:43:47.807 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:43:48.371 09:54:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:43:48.371 09:54:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65529 00:43:48.371 09:54:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:43:48.628 09:54:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:43:48.628 09:54:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65529 00:43:48.628 09:54:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:43:49.194 09:54:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:43:49.194 09:54:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65529 00:43:49.194 09:54:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:43:49.758 09:54:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:43:49.758 09:54:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65529 00:43:49.758 09:54:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:43:50.016 Initializing NVMe Controllers 00:43:50.016 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:43:50.016 Controller IO queue size 128, less than required. 00:43:50.016 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:43:50.016 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:43:50.016 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:43:50.016 Initialization complete. Launching workers. 
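Both perf runs warn "Controller IO queue size 128, less than required" because they ask for queue depth 128 (-q 128) against I/O queues holding 128 entries, one of which is always reserved, so at least one request has to wait inside the host driver, exactly as the message says. The ~1,000,000 us minimum latencies in the summary below look like a different effect: they line up with the Delay0 namespace attached at @50, which is presumably a latency-injecting delay bdev, rather than a symptom of the queueing. Following the log's own advice about depth would mean rerunning with a smaller value, for example:

    # same invocation as traced above, queue depth halved (illustrative value)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -t 3 -q 64 -w randrw -M 70 -o 512 -P 4

The summary that follows is from the run as actually configured (-q 128).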
00:43:50.016 ========================================================
00:43:50.016                                                                            Latency(us)
00:43:50.016 Device Information                                                      :       IOPS      MiB/s    Average        min        max
00:43:50.016 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1003087.76 1000158.26 1010267.42
00:43:50.016 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1004335.46 1000353.85 1042421.45
00:43:50.016 ========================================================
00:43:50.016 Total                                                                   :     256.00       0.12 1003711.61 1000158.26 1042421.45
00:43:50.016
00:43:50.275 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:43:50.275 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65529
00:43:50.275 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (65529) - No such process
00:43:50.275 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 65529
00:43:50.275 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:43:50.275 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:43:50.275 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:43:50.275 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:43:50.275 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:43:50.275 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:43:50.275 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:43:50.275 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:43:50.275 rmmod nvme_tcp
00:43:50.275 rmmod nvme_fabrics
00:43:50.275 rmmod nvme_keyring
00:43:50.533 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:43:50.533 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:43:50.533 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:43:50.533 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 65427 ']'
00:43:50.533 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 65427
00:43:50.533 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 65427 ']'
00:43:50.533 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 65427
00:43:50.533 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:43:50.533 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:43:50.533 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65427
00:43:50.533 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:43:50.533 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:43:50.533 killing 
process with pid 65427 00:43:50.533 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65427' 00:43:50.533 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 65427 00:43:50.533 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 65427 00:43:50.533 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:50.533 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:50.533 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:50.533 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:43:50.534 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:50.534 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:43:50.534 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:43:50.534 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:50.534 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:43:50.534 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:43:50.534 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:43:50.534 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:43:50.793 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:43:50.793 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:43:50.793 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:43:50.793 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:43:50.793 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:43:50.793 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:43:50.793 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:43:50.793 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:43:50.793 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:43:50.793 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:43:50.793 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:43:50.793 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:50.793 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:43:50.793 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:50.793 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:43:50.793 00:43:50.793 real 0m9.796s 00:43:50.793 user 0m29.128s 00:43:50.793 sys 0m1.532s 00:43:50.793 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:50.793 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:43:50.793 ************************************ 00:43:50.793 END TEST nvmf_delete_subsystem 00:43:50.793 ************************************ 00:43:51.052 09:54:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:43:51.052 09:54:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:43:51.052 09:54:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:51.052 09:54:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:43:51.052 ************************************ 00:43:51.052 START TEST nvmf_host_management 00:43:51.052 ************************************ 00:43:51.053 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:43:51.053 * Looking for test storage... 00:43:51.053 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:43:51.053 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:51.053 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:43:51.053 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:43:51.053 
09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:51.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:51.053 --rc genhtml_branch_coverage=1 00:43:51.053 --rc genhtml_function_coverage=1 00:43:51.053 --rc genhtml_legend=1 00:43:51.053 --rc geninfo_all_blocks=1 00:43:51.053 --rc geninfo_unexecuted_blocks=1 00:43:51.053 00:43:51.053 ' 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:51.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:51.053 --rc genhtml_branch_coverage=1 00:43:51.053 --rc genhtml_function_coverage=1 00:43:51.053 --rc genhtml_legend=1 00:43:51.053 --rc geninfo_all_blocks=1 00:43:51.053 --rc geninfo_unexecuted_blocks=1 00:43:51.053 00:43:51.053 ' 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:51.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:51.053 --rc genhtml_branch_coverage=1 00:43:51.053 --rc genhtml_function_coverage=1 00:43:51.053 --rc genhtml_legend=1 00:43:51.053 --rc geninfo_all_blocks=1 00:43:51.053 --rc geninfo_unexecuted_blocks=1 00:43:51.053 00:43:51.053 ' 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:51.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:51.053 --rc genhtml_branch_coverage=1 00:43:51.053 --rc 
genhtml_function_coverage=1 00:43:51.053 --rc genhtml_legend=1 00:43:51.053 --rc geninfo_all_blocks=1 00:43:51.053 --rc geninfo_unexecuted_blocks=1 00:43:51.053 00:43:51.053 ' 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:43:51.053 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:51.053 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:43:51.313 09:54:58 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:43:51.313 Cannot find device "nvmf_init_br" 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:43:51.313 Cannot find device "nvmf_init_br2" 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:43:51.313 Cannot find device "nvmf_tgt_br" 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:43:51.313 Cannot find device "nvmf_tgt_br2" 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:43:51.313 Cannot find device "nvmf_init_br" 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:43:51.313 Cannot find device "nvmf_init_br2" 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:43:51.313 Cannot find device "nvmf_tgt_br" 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:43:51.313 Cannot find device "nvmf_tgt_br2" 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:43:51.313 Cannot find device "nvmf_br" 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:43:51.313 09:54:58 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:43:51.313 Cannot find device "nvmf_init_if" 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:43:51.313 Cannot find device "nvmf_init_if2" 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:43:51.313 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:43:51.313 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:43:51.313 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:43:51.572 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:43:51.572 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:43:51.572 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:43:51.572 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:43:51.572 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:43:51.572 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:43:51.572 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:43:51.572 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:43:51.572 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:43:51.572 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:43:51.573 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:43:51.573 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:43:51.573 00:43:51.573 --- 10.0.0.3 ping statistics --- 00:43:51.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:51.573 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:43:51.573 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:43:51.573 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:43:51.573 00:43:51.573 --- 10.0.0.4 ping statistics --- 00:43:51.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:51.573 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:43:51.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:51.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:43:51.573 00:43:51.573 --- 10.0.0.1 ping statistics --- 00:43:51.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:51.573 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:43:51.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:51.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:43:51.573 00:43:51.573 --- 10.0.0.2 ping statistics --- 00:43:51.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:51.573 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=65829 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 65829 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # 
'[' -z 65829 ']' 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:51.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:51.573 09:54:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:51.573 [2024-12-09 09:54:58.571019] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:43:51.573 [2024-12-09 09:54:58.571123] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:51.832 [2024-12-09 09:54:58.733714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:51.832 [2024-12-09 09:54:58.791174] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:51.832 [2024-12-09 09:54:58.791228] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:51.832 [2024-12-09 09:54:58.791235] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:51.832 [2024-12-09 09:54:58.791241] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:51.832 [2024-12-09 09:54:58.791245] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
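Stepping back for a moment: the nvmf_veth_init sequence traced before this target came up builds the whole test topology, namely veth pairs for the initiator and target sides, with the target ends moved into the nvmf_tgt_ns_spdk namespace, everything addressed in 10.0.0.0/24 and bridged through nvmf_br, plus iptables ACCEPT rules for port 4420 tagged with an SPDK_NVMF comment (which is what lets the iptr teardown helper strip them again, as seen at the end of the previous test). A condensed sketch of one initiator/target pair, with commands as traced above and the second if2 pair and error handling omitted:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

The four pings traced after the setup confirm reachability across the bridge in both directions before any NVMe traffic is attempted.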
00:43:51.832 [2024-12-09 09:54:58.792144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:51.832 [2024-12-09 09:54:58.792239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:43:51.832 [2024-12-09 09:54:58.792427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:51.832 [2024-12-09 09:54:58.792430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:43:52.792 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:52.792 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:43:52.792 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:52.792 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:52.792 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:52.792 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:52.792 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:52.792 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:52.792 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:52.792 [2024-12-09 09:54:59.587528] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:52.792 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:52.792 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:43:52.792 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:52.792 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:52.792 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:43:52.792 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:43:52.792 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:43:52.792 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:52.792 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:52.792 Malloc0 00:43:52.792 [2024-12-09 09:54:59.675301] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:43:52.792 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:52.793 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:43:52.793 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:52.793 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:52.793 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=65902 00:43:52.793 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 65902 /var/tmp/bdevperf.sock 00:43:52.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:43:52.793 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 65902 ']' 00:43:52.793 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:43:52.793 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:52.793 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:43:52.793 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:43:52.793 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:43:52.793 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:52.793 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:52.793 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:43:52.793 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:43:52.793 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:52.793 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:52.793 { 00:43:52.793 "params": { 00:43:52.793 "name": "Nvme$subsystem", 00:43:52.793 "trtype": "$TEST_TRANSPORT", 00:43:52.793 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:52.793 "adrfam": "ipv4", 00:43:52.793 "trsvcid": "$NVMF_PORT", 00:43:52.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:52.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:52.793 "hdgst": ${hdgst:-false}, 00:43:52.793 "ddgst": ${ddgst:-false} 00:43:52.793 }, 00:43:52.793 "method": "bdev_nvme_attach_controller" 00:43:52.793 } 00:43:52.793 EOF 00:43:52.793 )") 00:43:52.793 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:43:52.793 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:43:52.793 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:43:52.793 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:52.793 "params": { 00:43:52.793 "name": "Nvme0", 00:43:52.793 "trtype": "tcp", 00:43:52.793 "traddr": "10.0.0.3", 00:43:52.793 "adrfam": "ipv4", 00:43:52.793 "trsvcid": "4420", 00:43:52.793 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:52.793 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:52.793 "hdgst": false, 00:43:52.793 "ddgst": false 00:43:52.793 }, 00:43:52.793 "method": "bdev_nvme_attach_controller" 00:43:52.793 }' 00:43:52.793 [2024-12-09 09:54:59.798825] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
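One detail of the bdevperf launch above deserves a note: gen_nvmf_target_json (a helper from the suite's common scripts) renders the Nvme0 attach parameters, the JSON block printed in the trace, and bdevperf receives them as --json /dev/fd/63, i.e. bash process substitution, so the config never touches disk. The equivalent standalone invocation, with flags as traced:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10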
00:43:52.793 [2024-12-09 09:54:59.798906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65902 ]
00:43:53.051 [2024-12-09 09:54:59.943188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:43:53.051 [2024-12-09 09:55:00.016699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:43:53.352 Running I/O for 10 seconds...
00:43:53.924 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:43:53.924 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:43:53.924 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:43:53.924 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:53.924 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:43:53.924 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:53.924 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:43:53.924 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:43:53.924 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:43:53.924 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:43:53.924 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:43:53.924 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:43:53.924 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:43:53.924 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:43:53.924 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:43:53.924 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:43:53.924 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:53.924 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:43:53.924 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:53.924 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1027
00:43:53.924 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1027 -ge 100 ']'
00:43:53.924 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:43:53.924 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break
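The waitforio loop traced at host_management.sh@45-60 above (its final return 0 follows below) polls bdevperf's iostat until the bdev shows real read traffic. Reconstructed from the xtrace, with one caveat: the retry delay is not visible in this excerpt and is assumed.

waitforio() {
    # Reconstruction of the loop traced above; rpc_cmd is the test suite's
    # wrapper around scripts/rpc.py, pointed at the bdevperf RPC socket.
    local rpc_sock=$1 bdev=$2
    local ret=1 i read_io_count
    [[ -n $rpc_sock && -n $bdev ]] || return 1
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0   # bdevperf is demonstrably doing I/O against the target
            break
        fi
        sleep 0.25  # assumed; the real interval is not shown in this log
    done
    return $ret
}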
00:43:53.924 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:43:53.924 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:43:53.924 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.924 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:53.924 [2024-12-09 09:55:00.890349] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d6e20 is same with the state(6) to be set 00:43:53.924 [2024-12-09 09:55:00.890423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d6e20 is same with the state(6) to be set 00:43:53.924 [2024-12-09 09:55:00.890433] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d6e20 is same with the state(6) to be set 00:43:53.924 [2024-12-09 09:55:00.894992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:43:53.924 [2024-12-09 09:55:00.895035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.924 [2024-12-09 09:55:00.895047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:43:53.924 [2024-12-09 09:55:00.895054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.924 [2024-12-09 09:55:00.895061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:43:53.924 [2024-12-09 09:55:00.895068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.924 [2024-12-09 09:55:00.895076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:43:53.924 [2024-12-09 09:55:00.895082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.924 [2024-12-09 09:55:00.895089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7130 is same with the state(6) to be set 00:43:53.924 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.924 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:43:53.924 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.924 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:53.924 [2024-12-09 09:55:00.901944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.924 [2024-12-09 09:55:00.901985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.924 [2024-12-09 09:55:00.902003] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.924 [2024-12-09 09:55:00.902011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.924 [2024-12-09 09:55:00.902020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902157] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902334] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902522] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.925 [2024-12-09 09:55:00.902672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.925 [2024-12-09 09:55:00.902679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.926 [2024-12-09 09:55:00.902690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.926 [2024-12-09 09:55:00.902699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.926 [2024-12-09 09:55:00.902708] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.926 [2024-12-09 09:55:00.902715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.926 [2024-12-09 09:55:00.902724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.926 [2024-12-09 09:55:00.902733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.926 [2024-12-09 09:55:00.902744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.926 [2024-12-09 09:55:00.902750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.926 [2024-12-09 09:55:00.902760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.926 [2024-12-09 09:55:00.902770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.926 [2024-12-09 09:55:00.902780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.926 [2024-12-09 09:55:00.902787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.926 [2024-12-09 09:55:00.902798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.926 [2024-12-09 09:55:00.902807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.926 [2024-12-09 09:55:00.902815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.926 [2024-12-09 09:55:00.902822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.926 [2024-12-09 09:55:00.902832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.926 [2024-12-09 09:55:00.902842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.926 [2024-12-09 09:55:00.902852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.926 [2024-12-09 09:55:00.902859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.926 [2024-12-09 09:55:00.902870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.926 [2024-12-09 09:55:00.902879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.926 [2024-12-09 09:55:00.902891] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.926 [2024-12-09 09:55:00.902898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.926 [2024-12-09 09:55:00.902910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.926 [2024-12-09 09:55:00.902918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.926 [2024-12-09 09:55:00.902937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.926 [2024-12-09 09:55:00.902949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.926 [2024-12-09 09:55:00.902957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.926 [2024-12-09 09:55:00.902963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.926 [2024-12-09 09:55:00.902972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.926 [2024-12-09 09:55:00.902979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.926 [2024-12-09 09:55:00.902988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.926 [2024-12-09 09:55:00.902997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.926 [2024-12-09 09:55:00.903006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.926 [2024-12-09 09:55:00.903012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.926 [2024-12-09 09:55:00.903023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.926 [2024-12-09 09:55:00.903032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.926 [2024-12-09 09:55:00.903041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.926 [2024-12-09 09:55:00.903051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.926 [2024-12-09 09:55:00.903060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.926 [2024-12-09 09:55:00.903069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.926 [2024-12-09 09:55:00.903080] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.926 [2024-12-09 09:55:00.903088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.926 [2024-12-09 09:55:00.903099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.926 [2024-12-09 09:55:00.903108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.926 [2024-12-09 09:55:00.903119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.926 [2024-12-09 09:55:00.903126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.926 [2024-12-09 09:55:00.904311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:43:53.926 task offset: 16384 on job bdev=Nvme0n1 fails 00:43:53.926 00:43:53.926 Latency(us) 00:43:53.926 [2024-12-09T09:55:00.970Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:53.926 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:43:53.926 Job: Nvme0n1 ended in about 0.71 seconds with error 00:43:53.926 Verification LBA range: start 0x0 length 0x400 00:43:53.926 Nvme0n1 : 0.71 1629.19 101.82 90.51 0.00 36401.80 1667.02 41668.30 00:43:53.926 [2024-12-09T09:55:00.970Z] =================================================================================================================== 00:43:53.926 [2024-12-09T09:55:00.970Z] Total : 1629.19 101.82 90.51 0.00 36401.80 1667.02 41668.30 00:43:53.926 [2024-12-09 09:55:00.906620] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:43:53.926 [2024-12-09 09:55:00.906648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f7130 (9): Bad file descriptor 00:43:53.926 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.926 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:43:53.926 [2024-12-09 09:55:00.915232] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
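The storm of ABORTED - SQ DELETION completions and the failed-job latency table above are the intended outcome of the test body: the host is deauthorized mid-run, every in-flight WRITE on qid:1 is aborted, and re-adding the host lets the automatic controller reset reconnect. Reduced to its two RPCs, shown here against scripts/rpc.py for clarity, while the test itself issues them through its rpc_cmd wrapper:

# Fault injection as exercised at host_management.sh@84-85 above.
scripts/rpc.py nvmf_subsystem_remove_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # target drops the host; in-flight I/O aborts
scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # host re-authorized; the controller reset reconnects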
00:43:55.306 09:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 65902 00:43:55.306 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65902) - No such process 00:43:55.306 09:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:43:55.306 09:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:43:55.306 09:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:43:55.306 09:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:43:55.306 09:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:43:55.306 09:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:43:55.306 09:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:55.306 09:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:55.306 { 00:43:55.306 "params": { 00:43:55.306 "name": "Nvme$subsystem", 00:43:55.306 "trtype": "$TEST_TRANSPORT", 00:43:55.306 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:55.306 "adrfam": "ipv4", 00:43:55.306 "trsvcid": "$NVMF_PORT", 00:43:55.306 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:55.306 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:55.306 "hdgst": ${hdgst:-false}, 00:43:55.306 "ddgst": ${ddgst:-false} 00:43:55.306 }, 00:43:55.306 "method": "bdev_nvme_attach_controller" 00:43:55.306 } 00:43:55.306 EOF 00:43:55.306 )") 00:43:55.306 09:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:43:55.306 09:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:43:55.306 09:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:43:55.306 09:55:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:55.306 "params": { 00:43:55.306 "name": "Nvme0", 00:43:55.306 "trtype": "tcp", 00:43:55.306 "traddr": "10.0.0.3", 00:43:55.306 "adrfam": "ipv4", 00:43:55.306 "trsvcid": "4420", 00:43:55.306 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:55.306 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:55.306 "hdgst": false, 00:43:55.306 "ddgst": false 00:43:55.306 }, 00:43:55.306 "method": "bdev_nvme_attach_controller" 00:43:55.306 }' 00:43:55.306 [2024-12-09 09:55:01.956423] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:43:55.306 [2024-12-09 09:55:01.956519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65952 ] 00:43:55.306 [2024-12-09 09:55:02.106905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:55.306 [2024-12-09 09:55:02.162126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:55.306 Running I/O for 1 seconds... 
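Two details of the teardown just traced are worth noting: the first bdevperf has normally exited on its own by the time line 91 runs, so the failed kill ("No such process") is tolerated, and the per-core lock files are cleared before the second, shorter run. As a sketch:

# Tolerant teardown, as traced at host_management.sh@91-97 above.
kill -9 "$perfpid" || true            # may already be gone; must not abort the test
rm -f /var/tmp/spdk_cpu_lock_00{1..4} # stale CPU-core locks from the killed app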
00:43:56.683 1855.00 IOPS, 115.94 MiB/s 00:43:56.683 Latency(us) 00:43:56.683 [2024-12-09T09:55:03.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:56.683 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:43:56.683 Verification LBA range: start 0x0 length 0x400 00:43:56.683 Nvme0n1 : 1.03 1872.30 117.02 0.00 0.00 33581.26 5179.92 29992.02 00:43:56.683 [2024-12-09T09:55:03.727Z] =================================================================================================================== 00:43:56.683 [2024-12-09T09:55:03.727Z] Total : 1872.30 117.02 0.00 0.00 33581.26 5179.92 29992.02 00:43:56.683 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:43:56.683 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:43:56.683 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:43:56.683 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:43:56.683 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:43:56.683 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:56.683 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:43:56.683 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:56.683 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:43:56.683 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:56.683 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:56.683 rmmod nvme_tcp 00:43:56.683 rmmod nvme_fabrics 00:43:56.683 rmmod nvme_keyring 00:43:56.683 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:56.683 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:43:56.683 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:43:56.683 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 65829 ']' 00:43:56.683 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 65829 00:43:56.683 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 65829 ']' 00:43:56.683 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 65829 00:43:56.683 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:43:56.683 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:56.683 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65829 00:43:56.683 killing process with pid 65829 00:43:56.683 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:43:56.683 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:43:56.683 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65829' 00:43:56.683 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 65829 00:43:56.683 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 65829 00:43:56.942 [2024-12-09 09:55:03.845219] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:43:56.942 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:56.942 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:56.942 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:56.942 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:43:56.942 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:43:56.942 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:56.942 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:43:56.942 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:56.942 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:43:56.942 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:43:56.942 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:43:56.942 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:43:56.942 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:43:56.942 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:43:56.942 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:43:56.942 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:43:56.942 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:43:56.942 09:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:43:57.200 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:43:57.200 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:43:57.200 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:43:57.200 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:43:57.200 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:43:57.201 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:43:57.201 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:57.201 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:57.201 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:43:57.201 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:43:57.201 00:43:57.201 real 0m6.302s 00:43:57.201 user 0m23.318s 00:43:57.201 sys 0m1.520s 00:43:57.201 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:57.201 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:57.201 ************************************ 00:43:57.201 END TEST nvmf_host_management 00:43:57.201 ************************************ 00:43:57.201 09:55:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:43:57.201 09:55:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:43:57.201 09:55:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:57.201 09:55:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:43:57.201 ************************************ 00:43:57.201 START TEST nvmf_lvol 00:43:57.201 ************************************ 00:43:57.201 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:43:57.461 * Looking for test storage... 
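The banner pairs above ("END TEST nvmf_host_management", "START TEST nvmf_lvol") and the real/user/sys times printed with them come from the run_test wrapper. A condensed reconstruction from the banners in this log; the real autotest_common.sh version adds the argument checks and xtrace handling visible in the @1105-@1130 trace lines:

run_test() {
    # Condensed sketch: banner, time the suite, banner again.
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}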
00:43:57.461 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:43:57.461 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:57.461 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:43:57.461 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:57.461 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:57.461 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:57.461 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:57.461 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:57.461 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:43:57.461 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:43:57.461 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:43:57.461 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:43:57.461 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:43:57.461 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:43:57.461 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:43:57.461 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:57.461 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:43:57.461 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:43:57.461 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:57.461 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:57.461 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:43:57.461 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:43:57.461 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:57.461 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:43:57.461 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:43:57.461 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:57.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:57.462 --rc genhtml_branch_coverage=1 00:43:57.462 --rc genhtml_function_coverage=1 00:43:57.462 --rc genhtml_legend=1 00:43:57.462 --rc geninfo_all_blocks=1 00:43:57.462 --rc geninfo_unexecuted_blocks=1 00:43:57.462 00:43:57.462 ' 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:57.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:57.462 --rc genhtml_branch_coverage=1 00:43:57.462 --rc genhtml_function_coverage=1 00:43:57.462 --rc genhtml_legend=1 00:43:57.462 --rc geninfo_all_blocks=1 00:43:57.462 --rc geninfo_unexecuted_blocks=1 00:43:57.462 00:43:57.462 ' 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:57.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:57.462 --rc genhtml_branch_coverage=1 00:43:57.462 --rc genhtml_function_coverage=1 00:43:57.462 --rc genhtml_legend=1 00:43:57.462 --rc geninfo_all_blocks=1 00:43:57.462 --rc geninfo_unexecuted_blocks=1 00:43:57.462 00:43:57.462 ' 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:57.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:57.462 --rc genhtml_branch_coverage=1 00:43:57.462 --rc genhtml_function_coverage=1 00:43:57.462 --rc genhtml_legend=1 00:43:57.462 --rc geninfo_all_blocks=1 00:43:57.462 --rc geninfo_unexecuted_blocks=1 00:43:57.462 00:43:57.462 ' 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:57.462 09:55:04 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:57.462 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:43:57.462 
09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:43:57.462 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
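Looking back at the lcov probe traced at scripts/common.sh@333-368 above (just before the PATH exports), the test "lt 1.15 2" is a field-by-field numeric version comparison: both strings are split on ".-:", missing fields default to 0, and the first unequal field decides. A condensed reconstruction, with the decimal-sanitizing helper from the trace folded into the ${...:-0} defaults:

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    # Split on the same separators the trace shows (IFS=.-:), then compare
    # numerically field by field; 'lt 1.15 2' succeeds because 1 < 2 in
    # the first field.
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v op=$2
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        if ((${ver1[v]:-0} > ${ver2[v]:-0})); then
            [[ $op == '>' || $op == '>=' ]]
            return
        elif ((${ver1[v]:-0} < ${ver2[v]:-0})); then
            [[ $op == '<' || $op == '<=' ]]
            return
        fi
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]
}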
00:43:57.463 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:43:57.463 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:43:57.463 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:43:57.722 Cannot find device "nvmf_init_br" 00:43:57.722 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:43:57.722 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:43:57.722 Cannot find device "nvmf_init_br2" 00:43:57.722 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:43:57.722 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:43:57.722 Cannot find device "nvmf_tgt_br" 00:43:57.722 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:43:57.722 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:43:57.722 Cannot find device "nvmf_tgt_br2" 00:43:57.722 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:43:57.722 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:43:57.722 Cannot find device "nvmf_init_br" 00:43:57.722 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:43:57.722 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:43:57.722 Cannot find device "nvmf_init_br2" 00:43:57.722 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:43:57.722 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:43:57.722 Cannot find device "nvmf_tgt_br" 00:43:57.722 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:43:57.722 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:43:57.722 Cannot find device "nvmf_tgt_br2" 00:43:57.722 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:43:57.722 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:43:57.722 Cannot find device "nvmf_br" 00:43:57.722 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:43:57.722 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:43:57.722 Cannot find device "nvmf_init_if" 00:43:57.722 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:43:57.722 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:43:57.722 Cannot find device "nvmf_init_if2" 00:43:57.722 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:43:57.722 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:43:57.722 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:57.722 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:43:57.722 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:43:57.722 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:43:57.722 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:43:57.722 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:43:57.722 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:43:57.722 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:43:57.722 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:43:57.722 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:43:57.722 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:43:57.722 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:43:58.039 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:43:58.039 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:43:58.039 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:43:58.039 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:43:58.039 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:43:58.039 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:43:58.039 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:43:58.039 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:43:58.039 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:43:58.039 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:43:58.039 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:43:58.039 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:43:58.039 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:43:58.039 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:43:58.039 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:43:58.039 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:43:58.039 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:43:58.039 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:43:58.039 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:43:58.040 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:43:58.040 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:43:58.040 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:43:58.040 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:43:58.040 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:43:58.040 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:43:58.040 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:43:58.040 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:43:58.040 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:43:58.040 00:43:58.040 --- 10.0.0.3 ping statistics --- 00:43:58.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:58.040 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:43:58.040 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:43:58.040 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:43:58.040 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:43:58.040 00:43:58.040 --- 10.0.0.4 ping statistics --- 00:43:58.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:58.040 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:43:58.040 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:43:58.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:58.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:43:58.040 00:43:58.040 --- 10.0.0.1 ping statistics --- 00:43:58.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:58.040 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:43:58.040 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:43:58.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:43:58.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:43:58.040 00:43:58.040 --- 10.0.0.2 ping statistics --- 00:43:58.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:58.040 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:43:58.040 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:58.040 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:43:58.040 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:58.040 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:58.040 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:58.040 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:58.040 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:58.040 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:58.040 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:58.040 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:43:58.040 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:58.040 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:58.040 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:43:58.040 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=66220 00:43:58.040 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:43:58.040 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 66220 00:43:58.040 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 66220 ']' 00:43:58.040 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:58.040 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:58.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:58.040 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:58.040 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:58.040 09:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:43:58.040 [2024-12-09 09:55:04.966550] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:43:58.040 [2024-12-09 09:55:04.966659] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:58.312 [2024-12-09 09:55:05.112148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:58.312 [2024-12-09 09:55:05.168891] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:58.312 [2024-12-09 09:55:05.168942] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:58.313 [2024-12-09 09:55:05.168949] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:58.313 [2024-12-09 09:55:05.168954] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:58.313 [2024-12-09 09:55:05.168958] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:58.313 [2024-12-09 09:55:05.169929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:58.313 [2024-12-09 09:55:05.169997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:58.313 [2024-12-09 09:55:05.170001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:58.877 09:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:58.877 09:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:43:58.877 09:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:58.877 09:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:58.877 09:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:43:59.136 09:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:59.136 09:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:43:59.136 [2024-12-09 09:55:06.144107] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:59.136 09:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:59.394 09:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:43:59.394 09:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:59.652 09:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:43:59.652 09:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:43:59.910 09:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:44:00.168 09:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=882f7966-cf8c-465a-bc11-ceb90325f7a9 00:44:00.168 09:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
882f7966-cf8c-465a-bc11-ceb90325f7a9 lvol 20 00:44:00.427 09:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=e964a800-fb2f-40ff-ae9e-20ef51fb2e37 00:44:00.427 09:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:44:00.687 09:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e964a800-fb2f-40ff-ae9e-20ef51fb2e37 00:44:00.947 09:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:44:01.207 [2024-12-09 09:55:08.059823] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:44:01.207 09:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:44:01.466 09:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=66362 00:44:01.466 09:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:44:01.466 09:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:44:02.470 09:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot e964a800-fb2f-40ff-ae9e-20ef51fb2e37 MY_SNAPSHOT 00:44:02.729 09:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1bb9194d-6155-46cf-8d22-8bb9ad1dcfaa 00:44:02.729 09:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize e964a800-fb2f-40ff-ae9e-20ef51fb2e37 30 00:44:02.988 09:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 1bb9194d-6155-46cf-8d22-8bb9ad1dcfaa MY_CLONE 00:44:03.247 09:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=89295f78-5223-405f-95b4-49c138531262 00:44:03.247 09:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 89295f78-5223-405f-95b4-49c138531262 00:44:03.856 09:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 66362 00:44:11.973 Initializing NVMe Controllers 00:44:11.973 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:44:11.973 Controller IO queue size 128, less than required. 00:44:11.973 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:44:11.973 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:44:11.973 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:44:11.973 Initialization complete. Launching workers. 
00:44:11.973 ======================================================== 00:44:11.973 Latency(us) 00:44:11.973 Device Information : IOPS MiB/s Average min max 00:44:11.974 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11629.90 45.43 11013.06 1963.04 54178.39 00:44:11.974 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11793.70 46.07 10860.13 4848.97 62579.40 00:44:11.974 ======================================================== 00:44:11.974 Total : 23423.60 91.50 10936.06 1963.04 62579.40 00:44:11.974 00:44:11.974 09:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:11.974 09:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete e964a800-fb2f-40ff-ae9e-20ef51fb2e37 00:44:12.232 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 882f7966-cf8c-465a-bc11-ceb90325f7a9 00:44:12.491 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:44:12.491 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:44:12.492 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:44:12.492 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:12.492 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:44:12.492 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:12.492 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:44:12.492 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:12.492 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:12.492 rmmod nvme_tcp 00:44:12.492 rmmod nvme_fabrics 00:44:12.492 rmmod nvme_keyring 00:44:12.492 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:12.492 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:44:12.492 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:44:12.492 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 66220 ']' 00:44:12.492 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 66220 00:44:12.492 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 66220 ']' 00:44:12.492 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 66220 00:44:12.492 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:44:12.492 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:12.492 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66220 00:44:12.492 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:12.492 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:12.492 killing process with pid 66220 00:44:12.492 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 66220' 00:44:12.492 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 66220 00:44:12.492 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 66220 00:44:12.752 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:44:12.752 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:12.752 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:12.752 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:44:12.752 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:44:12.752 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:44:12.752 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:12.752 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:12.752 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:44:12.752 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:44:12.752 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:44:12.752 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:44:12.752 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:44:12.752 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:44:12.752 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:44:12.752 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:44:12.752 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:44:12.752 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:44:12.752 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:44:13.012 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:44:13.012 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:44:13.012 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:44:13.012 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:44:13.012 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:13.012 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:13.012 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:13.012 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:44:13.012 00:44:13.012 real 0m15.711s 00:44:13.012 user 1m4.834s 00:44:13.012 sys 0m3.496s 00:44:13.013 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:44:13.013 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:44:13.013 ************************************ 00:44:13.013 END TEST nvmf_lvol 00:44:13.013 ************************************ 00:44:13.013 09:55:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:44:13.013 09:55:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:13.013 09:55:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:13.013 09:55:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:44:13.013 ************************************ 00:44:13.013 START TEST nvmf_lvs_grow 00:44:13.013 ************************************ 00:44:13.013 09:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:44:13.273 * Looking for test storage... 00:44:13.273 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:44:13.273 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:13.273 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:44:13.273 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:13.273 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:13.273 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:13.273 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:13.273 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:13.273 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:44:13.273 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:44:13.273 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:44:13.273 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:44:13.273 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:44:13.273 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:44:13.273 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:44:13.273 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:13.273 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:44:13.273 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:44:13.273 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:13.273 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:13.273 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:44:13.273 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:44:13.273 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:13.273 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:44:13.273 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:44:13.273 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:44:13.273 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:44:13.273 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:13.273 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:44:13.273 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:44:13.273 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:13.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:13.274 --rc genhtml_branch_coverage=1 00:44:13.274 --rc genhtml_function_coverage=1 00:44:13.274 --rc genhtml_legend=1 00:44:13.274 --rc geninfo_all_blocks=1 00:44:13.274 --rc geninfo_unexecuted_blocks=1 00:44:13.274 00:44:13.274 ' 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:13.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:13.274 --rc genhtml_branch_coverage=1 00:44:13.274 --rc genhtml_function_coverage=1 00:44:13.274 --rc genhtml_legend=1 00:44:13.274 --rc geninfo_all_blocks=1 00:44:13.274 --rc geninfo_unexecuted_blocks=1 00:44:13.274 00:44:13.274 ' 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:13.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:13.274 --rc genhtml_branch_coverage=1 00:44:13.274 --rc genhtml_function_coverage=1 00:44:13.274 --rc genhtml_legend=1 00:44:13.274 --rc geninfo_all_blocks=1 00:44:13.274 --rc geninfo_unexecuted_blocks=1 00:44:13.274 00:44:13.274 ' 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:13.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:13.274 --rc genhtml_branch_coverage=1 00:44:13.274 --rc genhtml_function_coverage=1 00:44:13.274 --rc genhtml_legend=1 00:44:13.274 --rc geninfo_all_blocks=1 00:44:13.274 --rc geninfo_unexecuted_blocks=1 00:44:13.274 00:44:13.274 ' 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:44:13.274 09:55:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:13.274 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:13.274 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:44:13.275 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:44:13.275 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:44:13.275 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:13.275 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:44:13.275 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:44:13.275 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:44:13.275 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:44:13.275 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:44:13.275 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:44:13.275 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:13.275 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:44:13.275 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:44:13.275 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:44:13.275 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:44:13.275 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:44:13.275 Cannot find device "nvmf_init_br" 00:44:13.275 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:44:13.275 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:44:13.275 Cannot find device "nvmf_init_br2" 00:44:13.275 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:44:13.275 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:44:13.535 Cannot find device "nvmf_tgt_br" 00:44:13.535 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:44:13.535 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:44:13.535 Cannot find device "nvmf_tgt_br2" 00:44:13.535 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:44:13.535 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:44:13.535 Cannot find device "nvmf_init_br" 00:44:13.535 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:44:13.535 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:44:13.535 Cannot find device "nvmf_init_br2" 00:44:13.535 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:44:13.535 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:44:13.535 Cannot find device "nvmf_tgt_br" 00:44:13.535 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:44:13.535 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:44:13.535 Cannot find device "nvmf_tgt_br2" 00:44:13.535 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:44:13.535 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:44:13.535 Cannot find device "nvmf_br" 00:44:13.535 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:44:13.535 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:44:13.535 Cannot find device "nvmf_init_if" 00:44:13.535 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:44:13.535 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:44:13.535 Cannot find device "nvmf_init_if2" 00:44:13.535 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:44:13.535 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:44:13.535 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:44:13.535 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:44:13.535 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:44:13.535 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:44:13.535 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:44:13.535 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:44:13.535 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:44:13.535 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:44:13.535 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:44:13.535 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:44:13.535 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:44:13.535 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:44:13.535 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:44:13.535 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:44:13.535 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:44:13.535 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:44:13.535 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:44:13.796 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:44:13.796 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.157 ms 00:44:13.796 00:44:13.796 --- 10.0.0.3 ping statistics --- 00:44:13.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:13.796 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:44:13.796 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:44:13.796 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.099 ms 00:44:13.796 00:44:13.796 --- 10.0.0.4 ping statistics --- 00:44:13.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:13.796 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:44:13.796 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:44:13.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:44:13.796 00:44:13.796 --- 10.0.0.1 ping statistics --- 00:44:13.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:13.796 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:44:13.796 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:44:13.796 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:44:13.796 00:44:13.796 --- 10.0.0.2 ping statistics --- 00:44:13.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:13.796 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=66776 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 66776 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 66776 ']' 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:13.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:13.796 09:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:44:13.796 [2024-12-09 09:55:20.813115] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:44:13.797 [2024-12-09 09:55:20.813183] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:14.101 [2024-12-09 09:55:20.965623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:14.101 [2024-12-09 09:55:21.017249] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:14.101 [2024-12-09 09:55:21.017305] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:14.101 [2024-12-09 09:55:21.017313] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:14.101 [2024-12-09 09:55:21.017318] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:14.101 [2024-12-09 09:55:21.017323] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:14.101 [2024-12-09 09:55:21.017604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:14.671 09:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:14.671 09:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:44:14.671 09:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:44:14.671 09:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:14.671 09:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:44:14.930 09:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:14.930 09:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:44:15.190 [2024-12-09 09:55:21.972945] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:15.190 09:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:44:15.190 09:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:15.190 09:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:15.190 09:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:44:15.190 ************************************ 00:44:15.190 START TEST lvs_grow_clean 00:44:15.190 ************************************ 00:44:15.190 09:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:44:15.190 09:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:44:15.190 09:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:44:15.190 09:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:44:15.190 09:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:44:15.190 09:55:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:44:15.190 09:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:44:15.190 09:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:44:15.190 09:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:44:15.190 09:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:44:15.450 09:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:44:15.450 09:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:44:15.710 09:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=acf81e5c-cc23-4ed3-9e2e-81e8af1c6f93 00:44:15.710 09:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:44:15.710 09:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acf81e5c-cc23-4ed3-9e2e-81e8af1c6f93 00:44:15.971 09:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:44:15.971 09:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:44:15.971 09:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u acf81e5c-cc23-4ed3-9e2e-81e8af1c6f93 lvol 150 00:44:16.231 09:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=0ab84772-916b-40ab-96b0-1ce124abe9c4 00:44:16.231 09:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:44:16.231 09:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:44:16.491 [2024-12-09 09:55:23.323158] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:44:16.491 [2024-12-09 09:55:23.323231] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:44:16.491 true 00:44:16.491 09:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acf81e5c-cc23-4ed3-9e2e-81e8af1c6f93 00:44:16.491 09:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:44:16.750 09:55:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:44:16.750 09:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:44:17.009 09:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0ab84772-916b-40ab-96b0-1ce124abe9c4 00:44:17.267 09:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:44:17.267 [2024-12-09 09:55:24.285810] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:44:17.526 09:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:44:17.526 09:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:44:17.526 09:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66943 00:44:17.526 09:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:44:17.526 09:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66943 /var/tmp/bdevperf.sock 00:44:17.526 09:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 66943 ']' 00:44:17.526 09:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:44:17.526 09:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:17.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:44:17.526 09:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:44:17.526 09:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:17.526 09:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:44:17.785 [2024-12-09 09:55:24.571841] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
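The sequence just traced wires the freshly created lvol into an NVMe-oF subsystem and starts the initiator side. Condensed into plain commands, all taken verbatim from the trace (the attach step appears just below); the UUID and addresses are the ones this run happened to allocate:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0ab84772-916b-40ab-96b0-1ce124abe9c4
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    # bdevperf starts idle (-z) and is driven over its own private RPC socket:
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

perform_tests is then kicked off via bdevperf.py against the same socket, which is what produces the 10-second randwrite table further down.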
00:44:17.785 [2024-12-09 09:55:24.571909] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66943 ] 00:44:17.785 [2024-12-09 09:55:24.712927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:17.785 [2024-12-09 09:55:24.778700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:18.721 09:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:18.721 09:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:44:18.721 09:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:44:18.979 Nvme0n1 00:44:18.979 09:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:44:19.238 [ 00:44:19.238 { 00:44:19.238 "aliases": [ 00:44:19.238 "0ab84772-916b-40ab-96b0-1ce124abe9c4" 00:44:19.238 ], 00:44:19.238 "assigned_rate_limits": { 00:44:19.238 "r_mbytes_per_sec": 0, 00:44:19.238 "rw_ios_per_sec": 0, 00:44:19.238 "rw_mbytes_per_sec": 0, 00:44:19.238 "w_mbytes_per_sec": 0 00:44:19.238 }, 00:44:19.238 "block_size": 4096, 00:44:19.238 "claimed": false, 00:44:19.238 "driver_specific": { 00:44:19.238 "mp_policy": "active_passive", 00:44:19.238 "nvme": [ 00:44:19.238 { 00:44:19.238 "ctrlr_data": { 00:44:19.238 "ana_reporting": false, 00:44:19.238 "cntlid": 1, 00:44:19.238 "firmware_revision": "25.01", 00:44:19.238 "model_number": "SPDK bdev Controller", 00:44:19.238 "multi_ctrlr": true, 00:44:19.238 "oacs": { 00:44:19.238 "firmware": 0, 00:44:19.238 "format": 0, 00:44:19.238 "ns_manage": 0, 00:44:19.238 "security": 0 00:44:19.238 }, 00:44:19.238 "serial_number": "SPDK0", 00:44:19.238 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:19.238 "vendor_id": "0x8086" 00:44:19.238 }, 00:44:19.238 "ns_data": { 00:44:19.238 "can_share": true, 00:44:19.238 "id": 1 00:44:19.238 }, 00:44:19.238 "trid": { 00:44:19.238 "adrfam": "IPv4", 00:44:19.238 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:19.238 "traddr": "10.0.0.3", 00:44:19.238 "trsvcid": "4420", 00:44:19.238 "trtype": "TCP" 00:44:19.238 }, 00:44:19.238 "vs": { 00:44:19.238 "nvme_version": "1.3" 00:44:19.238 } 00:44:19.238 } 00:44:19.238 ] 00:44:19.238 }, 00:44:19.238 "memory_domains": [ 00:44:19.238 { 00:44:19.238 "dma_device_id": "system", 00:44:19.238 "dma_device_type": 1 00:44:19.238 } 00:44:19.238 ], 00:44:19.238 "name": "Nvme0n1", 00:44:19.238 "num_blocks": 38912, 00:44:19.238 "numa_id": -1, 00:44:19.238 "product_name": "NVMe disk", 00:44:19.238 "supported_io_types": { 00:44:19.238 "abort": true, 00:44:19.238 "compare": true, 00:44:19.238 "compare_and_write": true, 00:44:19.238 "copy": true, 00:44:19.238 "flush": true, 00:44:19.238 "get_zone_info": false, 00:44:19.238 "nvme_admin": true, 00:44:19.238 "nvme_io": true, 00:44:19.238 "nvme_io_md": false, 00:44:19.238 "nvme_iov_md": false, 00:44:19.238 "read": true, 00:44:19.238 "reset": true, 00:44:19.238 "seek_data": false, 00:44:19.238 "seek_hole": false, 00:44:19.238 "unmap": true, 00:44:19.238 
"write": true, 00:44:19.238 "write_zeroes": true, 00:44:19.238 "zcopy": false, 00:44:19.238 "zone_append": false, 00:44:19.238 "zone_management": false 00:44:19.238 }, 00:44:19.238 "uuid": "0ab84772-916b-40ab-96b0-1ce124abe9c4", 00:44:19.238 "zoned": false 00:44:19.238 } 00:44:19.238 ] 00:44:19.238 09:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:44:19.238 09:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66995 00:44:19.238 09:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:44:19.238 Running I/O for 10 seconds... 00:44:20.613 Latency(us) 00:44:20.613 [2024-12-09T09:55:27.657Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:20.613 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:20.613 Nvme0n1 : 1.00 9977.00 38.97 0.00 0.00 0.00 0.00 0.00 00:44:20.613 [2024-12-09T09:55:27.657Z] =================================================================================================================== 00:44:20.613 [2024-12-09T09:55:27.657Z] Total : 9977.00 38.97 0.00 0.00 0.00 0.00 0.00 00:44:20.613 00:44:21.180 09:55:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u acf81e5c-cc23-4ed3-9e2e-81e8af1c6f93 00:44:21.438 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:21.438 Nvme0n1 : 2.00 10069.50 39.33 0.00 0.00 0.00 0.00 0.00 00:44:21.438 [2024-12-09T09:55:28.482Z] =================================================================================================================== 00:44:21.438 [2024-12-09T09:55:28.482Z] Total : 10069.50 39.33 0.00 0.00 0.00 0.00 0.00 00:44:21.438 00:44:21.438 true 00:44:21.438 09:55:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acf81e5c-cc23-4ed3-9e2e-81e8af1c6f93 00:44:21.438 09:55:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:44:22.005 09:55:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:44:22.005 09:55:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:44:22.005 09:55:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 66995 00:44:22.264 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:22.264 Nvme0n1 : 3.00 10000.00 39.06 0.00 0.00 0.00 0.00 0.00 00:44:22.264 [2024-12-09T09:55:29.308Z] =================================================================================================================== 00:44:22.264 [2024-12-09T09:55:29.308Z] Total : 10000.00 39.06 0.00 0.00 0.00 0.00 0.00 00:44:22.264 00:44:23.639 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:23.639 Nvme0n1 : 4.00 9574.00 37.40 0.00 0.00 0.00 0.00 0.00 00:44:23.639 [2024-12-09T09:55:30.683Z] =================================================================================================================== 00:44:23.639 [2024-12-09T09:55:30.683Z] Total : 9574.00 37.40 0.00 0.00 
0.00 0.00 0.00 00:44:23.639 00:44:24.573 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:24.573 Nvme0n1 : 5.00 9572.40 37.39 0.00 0.00 0.00 0.00 0.00 00:44:24.573 [2024-12-09T09:55:31.617Z] =================================================================================================================== 00:44:24.573 [2024-12-09T09:55:31.617Z] Total : 9572.40 37.39 0.00 0.00 0.00 0.00 0.00 00:44:24.573 00:44:25.510 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:25.510 Nvme0n1 : 6.00 9575.33 37.40 0.00 0.00 0.00 0.00 0.00 00:44:25.510 [2024-12-09T09:55:32.554Z] =================================================================================================================== 00:44:25.510 [2024-12-09T09:55:32.554Z] Total : 9575.33 37.40 0.00 0.00 0.00 0.00 0.00 00:44:25.510 00:44:26.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:26.446 Nvme0n1 : 7.00 9595.71 37.48 0.00 0.00 0.00 0.00 0.00 00:44:26.446 [2024-12-09T09:55:33.490Z] =================================================================================================================== 00:44:26.446 [2024-12-09T09:55:33.490Z] Total : 9595.71 37.48 0.00 0.00 0.00 0.00 0.00 00:44:26.446 00:44:27.383 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:27.383 Nvme0n1 : 8.00 9565.25 37.36 0.00 0.00 0.00 0.00 0.00 00:44:27.383 [2024-12-09T09:55:34.427Z] =================================================================================================================== 00:44:27.383 [2024-12-09T09:55:34.427Z] Total : 9565.25 37.36 0.00 0.00 0.00 0.00 0.00 00:44:27.383 00:44:28.331 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:28.331 Nvme0n1 : 9.00 9536.56 37.25 0.00 0.00 0.00 0.00 0.00 00:44:28.331 [2024-12-09T09:55:35.375Z] =================================================================================================================== 00:44:28.331 [2024-12-09T09:55:35.375Z] Total : 9536.56 37.25 0.00 0.00 0.00 0.00 0.00 00:44:28.331 00:44:29.277 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:29.277 Nvme0n1 : 10.00 9512.60 37.16 0.00 0.00 0.00 0.00 0.00 00:44:29.277 [2024-12-09T09:55:36.322Z] =================================================================================================================== 00:44:29.278 [2024-12-09T09:55:36.322Z] Total : 9512.60 37.16 0.00 0.00 0.00 0.00 0.00 00:44:29.278 00:44:29.278 00:44:29.278 Latency(us) 00:44:29.278 [2024-12-09T09:55:36.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:29.278 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:29.278 Nvme0n1 : 10.01 9516.22 37.17 0.00 0.00 13445.91 5695.05 173083.72 00:44:29.278 [2024-12-09T09:55:36.322Z] =================================================================================================================== 00:44:29.278 [2024-12-09T09:55:36.322Z] Total : 9516.22 37.17 0.00 0.00 13445.91 5695.05 173083.72 00:44:29.278 { 00:44:29.278 "results": [ 00:44:29.278 { 00:44:29.278 "job": "Nvme0n1", 00:44:29.278 "core_mask": "0x2", 00:44:29.278 "workload": "randwrite", 00:44:29.278 "status": "finished", 00:44:29.278 "queue_depth": 128, 00:44:29.278 "io_size": 4096, 00:44:29.278 "runtime": 10.009642, 00:44:29.278 "iops": 9516.22445637916, 00:44:29.278 "mibps": 37.17275178273109, 00:44:29.278 "io_failed": 0, 00:44:29.278 "io_timeout": 0, 00:44:29.278 "avg_latency_us": 
13445.91185444607, 00:44:29.278 "min_latency_us": 5695.049781659389, 00:44:29.278 "max_latency_us": 173083.72401746726 00:44:29.278 } 00:44:29.278 ], 00:44:29.278 "core_count": 1 00:44:29.278 } 00:44:29.278 09:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66943 00:44:29.278 09:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 66943 ']' 00:44:29.278 09:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 66943 00:44:29.278 09:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:44:29.278 09:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:29.278 09:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66943 00:44:29.536 09:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:29.536 09:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:29.536 09:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66943' 00:44:29.536 killing process with pid 66943 00:44:29.536 Received shutdown signal, test time was about 10.000000 seconds 00:44:29.536 00:44:29.537 Latency(us) 00:44:29.537 [2024-12-09T09:55:36.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:29.537 [2024-12-09T09:55:36.581Z] =================================================================================================================== 00:44:29.537 [2024-12-09T09:55:36.581Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:29.537 09:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 66943 00:44:29.537 09:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 66943 00:44:29.537 09:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:44:29.795 09:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:30.053 09:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:44:30.053 09:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acf81e5c-cc23-4ed3-9e2e-81e8af1c6f93 00:44:30.311 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:44:30.311 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:44:30.311 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:44:30.570 [2024-12-09 09:55:37.422008] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore 
lvs 00:44:30.570 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acf81e5c-cc23-4ed3-9e2e-81e8af1c6f93 00:44:30.570 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:44:30.570 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acf81e5c-cc23-4ed3-9e2e-81e8af1c6f93 00:44:30.570 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:30.570 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:30.570 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:30.570 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:30.570 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:30.570 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:30.570 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:30.570 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:44:30.570 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acf81e5c-cc23-4ed3-9e2e-81e8af1c6f93 00:44:30.828 2024/12/09 09:55:37 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:acf81e5c-cc23-4ed3-9e2e-81e8af1c6f93], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:44:30.828 request: 00:44:30.828 { 00:44:30.828 "method": "bdev_lvol_get_lvstores", 00:44:30.828 "params": { 00:44:30.828 "uuid": "acf81e5c-cc23-4ed3-9e2e-81e8af1c6f93" 00:44:30.828 } 00:44:30.828 } 00:44:30.828 Got JSON-RPC error response 00:44:30.828 GoRPCClient: error on JSON-RPC call 00:44:30.828 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:44:30.828 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:30.828 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:30.828 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:30.828 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:44:31.087 aio_bdev 00:44:31.087 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0ab84772-916b-40ab-96b0-1ce124abe9c4 00:44:31.087 09:55:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=0ab84772-916b-40ab-96b0-1ce124abe9c4 00:44:31.087 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:44:31.087 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:44:31.087 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:44:31.087 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:44:31.087 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:44:31.346 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0ab84772-916b-40ab-96b0-1ce124abe9c4 -t 2000 00:44:31.604 [ 00:44:31.604 { 00:44:31.604 "aliases": [ 00:44:31.604 "lvs/lvol" 00:44:31.604 ], 00:44:31.604 "assigned_rate_limits": { 00:44:31.604 "r_mbytes_per_sec": 0, 00:44:31.605 "rw_ios_per_sec": 0, 00:44:31.605 "rw_mbytes_per_sec": 0, 00:44:31.605 "w_mbytes_per_sec": 0 00:44:31.605 }, 00:44:31.605 "block_size": 4096, 00:44:31.605 "claimed": false, 00:44:31.605 "driver_specific": { 00:44:31.605 "lvol": { 00:44:31.605 "base_bdev": "aio_bdev", 00:44:31.605 "clone": false, 00:44:31.605 "esnap_clone": false, 00:44:31.605 "lvol_store_uuid": "acf81e5c-cc23-4ed3-9e2e-81e8af1c6f93", 00:44:31.605 "num_allocated_clusters": 38, 00:44:31.605 "snapshot": false, 00:44:31.605 "thin_provision": false 00:44:31.605 } 00:44:31.605 }, 00:44:31.605 "name": "0ab84772-916b-40ab-96b0-1ce124abe9c4", 00:44:31.605 "num_blocks": 38912, 00:44:31.605 "product_name": "Logical Volume", 00:44:31.605 "supported_io_types": { 00:44:31.605 "abort": false, 00:44:31.605 "compare": false, 00:44:31.605 "compare_and_write": false, 00:44:31.605 "copy": false, 00:44:31.605 "flush": false, 00:44:31.605 "get_zone_info": false, 00:44:31.605 "nvme_admin": false, 00:44:31.605 "nvme_io": false, 00:44:31.605 "nvme_io_md": false, 00:44:31.605 "nvme_iov_md": false, 00:44:31.605 "read": true, 00:44:31.605 "reset": true, 00:44:31.605 "seek_data": true, 00:44:31.605 "seek_hole": true, 00:44:31.605 "unmap": true, 00:44:31.605 "write": true, 00:44:31.605 "write_zeroes": true, 00:44:31.605 "zcopy": false, 00:44:31.605 "zone_append": false, 00:44:31.605 "zone_management": false 00:44:31.605 }, 00:44:31.605 "uuid": "0ab84772-916b-40ab-96b0-1ce124abe9c4", 00:44:31.605 "zoned": false 00:44:31.605 } 00:44:31.605 ] 00:44:31.605 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:44:31.605 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acf81e5c-cc23-4ed3-9e2e-81e8af1c6f93 00:44:31.605 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:44:31.863 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:44:31.863 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
acf81e5c-cc23-4ed3-9e2e-81e8af1c6f93 00:44:31.863 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:44:32.121 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:44:32.121 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 0ab84772-916b-40ab-96b0-1ce124abe9c4 00:44:32.379 09:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u acf81e5c-cc23-4ed3-9e2e-81e8af1c6f93 00:44:32.703 09:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:44:32.703 09:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:44:33.278 00:44:33.278 real 0m18.025s 00:44:33.278 user 0m17.502s 00:44:33.278 sys 0m2.118s 00:44:33.278 09:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:33.278 ************************************ 00:44:33.278 09:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:44:33.278 END TEST lvs_grow_clean 00:44:33.278 ************************************ 00:44:33.278 09:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:44:33.278 09:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:33.278 09:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:33.278 09:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:44:33.278 ************************************ 00:44:33.278 START TEST lvs_grow_dirty 00:44:33.278 ************************************ 00:44:33.278 09:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:44:33.278 09:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:44:33.278 09:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:44:33.278 09:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:44:33.278 09:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:44:33.278 09:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:44:33.278 09:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:44:33.278 09:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:44:33.278 09:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:44:33.278 
09:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:44:33.535 09:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:44:33.535 09:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:44:33.793 09:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=5e7904ec-cc2a-4ed7-9428-db4cd60154b4 00:44:33.793 09:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:44:33.793 09:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e7904ec-cc2a-4ed7-9428-db4cd60154b4 00:44:33.793 09:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:44:33.793 09:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:44:33.793 09:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5e7904ec-cc2a-4ed7-9428-db4cd60154b4 lvol 150 00:44:34.359 09:55:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f480e75b-a9ea-498f-b863-bcdc3ebde34b 00:44:34.359 09:55:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:44:34.359 09:55:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:44:34.359 [2024-12-09 09:55:41.358232] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:44:34.359 [2024-12-09 09:55:41.358304] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:44:34.359 true 00:44:34.359 09:55:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:44:34.359 09:55:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e7904ec-cc2a-4ed7-9428-db4cd60154b4 00:44:34.617 09:55:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:44:34.617 09:55:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:44:34.875 09:55:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f480e75b-a9ea-498f-b863-bcdc3ebde34b 00:44:35.134 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:44:35.393 [2024-12-09 09:55:42.276912] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:44:35.393 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:44:35.651 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=67393 00:44:35.652 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:44:35.652 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:44:35.652 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 67393 /var/tmp/bdevperf.sock 00:44:35.652 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 67393 ']' 00:44:35.652 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:44:35.652 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:35.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:44:35.652 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:44:35.652 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:35.652 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:44:35.652 [2024-12-09 09:55:42.574198] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
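For the dirty variant the subsystem export and bdevperf setup mirror the clean run above; the step both tests actually exercise, issued while the randwrite workload is in flight, is the lvstore grow. Pulled together from the surrounding trace (UUID, paths and sizes are this run's; the sketch compresses steps that the log interleaves with I/O):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    lvs=5e7904ec-cc2a-4ed7-9428-db4cd60154b4
    truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev  # 200M -> 400M
    $RPC bdev_aio_rescan aio_bdev          # bdev grows from 51200 to 102400 blocks
    $RPC bdev_lvol_grow_lvstore -u "$lvs"  # lvstore picks up the new capacity
    $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # 49 -> 99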
00:44:35.652 [2024-12-09 09:55:42.574266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67393 ] 00:44:35.910 [2024-12-09 09:55:42.721822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:35.910 [2024-12-09 09:55:42.778249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:36.477 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:36.477 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:44:36.477 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:44:36.735 Nvme0n1 00:44:36.735 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:44:36.995 [ 00:44:36.995 { 00:44:36.995 "aliases": [ 00:44:36.995 "f480e75b-a9ea-498f-b863-bcdc3ebde34b" 00:44:36.995 ], 00:44:36.995 "assigned_rate_limits": { 00:44:36.995 "r_mbytes_per_sec": 0, 00:44:36.995 "rw_ios_per_sec": 0, 00:44:36.995 "rw_mbytes_per_sec": 0, 00:44:36.995 "w_mbytes_per_sec": 0 00:44:36.995 }, 00:44:36.995 "block_size": 4096, 00:44:36.995 "claimed": false, 00:44:36.995 "driver_specific": { 00:44:36.995 "mp_policy": "active_passive", 00:44:36.995 "nvme": [ 00:44:36.995 { 00:44:36.995 "ctrlr_data": { 00:44:36.995 "ana_reporting": false, 00:44:36.995 "cntlid": 1, 00:44:36.995 "firmware_revision": "25.01", 00:44:36.995 "model_number": "SPDK bdev Controller", 00:44:36.995 "multi_ctrlr": true, 00:44:36.995 "oacs": { 00:44:36.995 "firmware": 0, 00:44:36.995 "format": 0, 00:44:36.995 "ns_manage": 0, 00:44:36.995 "security": 0 00:44:36.995 }, 00:44:36.995 "serial_number": "SPDK0", 00:44:36.995 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:36.995 "vendor_id": "0x8086" 00:44:36.995 }, 00:44:36.995 "ns_data": { 00:44:36.995 "can_share": true, 00:44:36.995 "id": 1 00:44:36.995 }, 00:44:36.995 "trid": { 00:44:36.995 "adrfam": "IPv4", 00:44:36.995 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:36.995 "traddr": "10.0.0.3", 00:44:36.995 "trsvcid": "4420", 00:44:36.995 "trtype": "TCP" 00:44:36.995 }, 00:44:36.995 "vs": { 00:44:36.995 "nvme_version": "1.3" 00:44:36.995 } 00:44:36.995 } 00:44:36.995 ] 00:44:36.995 }, 00:44:36.995 "memory_domains": [ 00:44:36.995 { 00:44:36.995 "dma_device_id": "system", 00:44:36.995 "dma_device_type": 1 00:44:36.995 } 00:44:36.995 ], 00:44:36.995 "name": "Nvme0n1", 00:44:36.995 "num_blocks": 38912, 00:44:36.995 "numa_id": -1, 00:44:36.995 "product_name": "NVMe disk", 00:44:36.995 "supported_io_types": { 00:44:36.995 "abort": true, 00:44:36.995 "compare": true, 00:44:36.995 "compare_and_write": true, 00:44:36.995 "copy": true, 00:44:36.995 "flush": true, 00:44:36.995 "get_zone_info": false, 00:44:36.995 "nvme_admin": true, 00:44:36.995 "nvme_io": true, 00:44:36.995 "nvme_io_md": false, 00:44:36.995 "nvme_iov_md": false, 00:44:36.995 "read": true, 00:44:36.995 "reset": true, 00:44:36.995 "seek_data": false, 00:44:36.995 "seek_hole": false, 00:44:36.995 "unmap": true, 00:44:36.995 
"write": true, 00:44:36.995 "write_zeroes": true, 00:44:36.995 "zcopy": false, 00:44:36.995 "zone_append": false, 00:44:36.995 "zone_management": false 00:44:36.995 }, 00:44:36.995 "uuid": "f480e75b-a9ea-498f-b863-bcdc3ebde34b", 00:44:36.995 "zoned": false 00:44:36.995 } 00:44:36.995 ] 00:44:36.995 09:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=67436 00:44:36.995 09:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:44:36.995 09:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:44:37.254 Running I/O for 10 seconds... 00:44:38.189 Latency(us) 00:44:38.189 [2024-12-09T09:55:45.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:38.189 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:38.189 Nvme0n1 : 1.00 10736.00 41.94 0.00 0.00 0.00 0.00 0.00 00:44:38.189 [2024-12-09T09:55:45.233Z] =================================================================================================================== 00:44:38.189 [2024-12-09T09:55:45.233Z] Total : 10736.00 41.94 0.00 0.00 0.00 0.00 0.00 00:44:38.189 00:44:39.121 09:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5e7904ec-cc2a-4ed7-9428-db4cd60154b4 00:44:39.121 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:39.121 Nvme0n1 : 2.00 10567.50 41.28 0.00 0.00 0.00 0.00 0.00 00:44:39.121 [2024-12-09T09:55:46.165Z] =================================================================================================================== 00:44:39.121 [2024-12-09T09:55:46.165Z] Total : 10567.50 41.28 0.00 0.00 0.00 0.00 0.00 00:44:39.121 00:44:39.380 true 00:44:39.380 09:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:44:39.380 09:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e7904ec-cc2a-4ed7-9428-db4cd60154b4 00:44:39.638 09:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:44:39.638 09:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:44:39.638 09:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 67436 00:44:40.203 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:40.203 Nvme0n1 : 3.00 10393.67 40.60 0.00 0.00 0.00 0.00 0.00 00:44:40.203 [2024-12-09T09:55:47.247Z] =================================================================================================================== 00:44:40.203 [2024-12-09T09:55:47.247Z] Total : 10393.67 40.60 0.00 0.00 0.00 0.00 0.00 00:44:40.203 00:44:41.140 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:41.140 Nvme0n1 : 4.00 10285.25 40.18 0.00 0.00 0.00 0.00 0.00 00:44:41.140 [2024-12-09T09:55:48.184Z] =================================================================================================================== 00:44:41.140 [2024-12-09T09:55:48.184Z] Total : 10285.25 40.18 0.00 
0.00 0.00 0.00 0.00 00:44:41.140 00:44:42.077 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:42.077 Nvme0n1 : 5.00 9679.20 37.81 0.00 0.00 0.00 0.00 0.00 00:44:42.077 [2024-12-09T09:55:49.121Z] =================================================================================================================== 00:44:42.077 [2024-12-09T09:55:49.121Z] Total : 9679.20 37.81 0.00 0.00 0.00 0.00 0.00 00:44:42.077 00:44:43.454 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:43.454 Nvme0n1 : 6.00 8787.50 34.33 0.00 0.00 0.00 0.00 0.00 00:44:43.454 [2024-12-09T09:55:50.498Z] =================================================================================================================== 00:44:43.454 [2024-12-09T09:55:50.498Z] Total : 8787.50 34.33 0.00 0.00 0.00 0.00 0.00 00:44:43.454 00:44:44.390 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:44.390 Nvme0n1 : 7.00 8705.14 34.00 0.00 0.00 0.00 0.00 0.00 00:44:44.390 [2024-12-09T09:55:51.434Z] =================================================================================================================== 00:44:44.390 [2024-12-09T09:55:51.434Z] Total : 8705.14 34.00 0.00 0.00 0.00 0.00 0.00 00:44:44.390 00:44:45.327 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:45.327 Nvme0n1 : 8.00 8848.25 34.56 0.00 0.00 0.00 0.00 0.00 00:44:45.327 [2024-12-09T09:55:52.371Z] =================================================================================================================== 00:44:45.327 [2024-12-09T09:55:52.371Z] Total : 8848.25 34.56 0.00 0.00 0.00 0.00 0.00 00:44:45.327 00:44:46.263 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:46.263 Nvme0n1 : 9.00 8928.56 34.88 0.00 0.00 0.00 0.00 0.00 00:44:46.263 [2024-12-09T09:55:53.307Z] =================================================================================================================== 00:44:46.263 [2024-12-09T09:55:53.307Z] Total : 8928.56 34.88 0.00 0.00 0.00 0.00 0.00 00:44:46.263 00:44:47.201 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:47.201 Nvme0n1 : 10.00 8991.80 35.12 0.00 0.00 0.00 0.00 0.00 00:44:47.201 [2024-12-09T09:55:54.245Z] =================================================================================================================== 00:44:47.201 [2024-12-09T09:55:54.245Z] Total : 8991.80 35.12 0.00 0.00 0.00 0.00 0.00 00:44:47.201 00:44:47.201 00:44:47.201 Latency(us) 00:44:47.201 [2024-12-09T09:55:54.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:47.201 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:47.201 Nvme0n1 : 10.01 8996.12 35.14 0.00 0.00 14221.36 5237.16 831534.50 00:44:47.201 [2024-12-09T09:55:54.245Z] =================================================================================================================== 00:44:47.201 [2024-12-09T09:55:54.245Z] Total : 8996.12 35.14 0.00 0.00 14221.36 5237.16 831534.50 00:44:47.201 { 00:44:47.201 "results": [ 00:44:47.201 { 00:44:47.201 "job": "Nvme0n1", 00:44:47.201 "core_mask": "0x2", 00:44:47.201 "workload": "randwrite", 00:44:47.201 "status": "finished", 00:44:47.201 "queue_depth": 128, 00:44:47.201 "io_size": 4096, 00:44:47.201 "runtime": 10.009431, 00:44:47.201 "iops": 8996.115763223705, 00:44:47.201 "mibps": 35.141077200092596, 00:44:47.201 "io_failed": 0, 00:44:47.201 "io_timeout": 0, 00:44:47.201 "avg_latency_us": 
14221.355928958968, 00:44:47.201 "min_latency_us": 5237.156331877729, 00:44:47.201 "max_latency_us": 831534.5048034935 00:44:47.201 } 00:44:47.201 ], 00:44:47.201 "core_count": 1 00:44:47.201 } 00:44:47.201 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 67393 00:44:47.201 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 67393 ']' 00:44:47.201 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 67393 00:44:47.201 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:44:47.201 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:47.201 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67393 00:44:47.201 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:47.201 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:47.201 killing process with pid 67393 00:44:47.201 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67393' 00:44:47.201 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 67393 00:44:47.201 Received shutdown signal, test time was about 10.000000 seconds 00:44:47.201 00:44:47.201 Latency(us) 00:44:47.201 [2024-12-09T09:55:54.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:47.201 [2024-12-09T09:55:54.245Z] =================================================================================================================== 00:44:47.201 [2024-12-09T09:55:54.245Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:47.201 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 67393 00:44:47.460 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:44:47.719 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:47.978 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:44:47.978 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e7904ec-cc2a-4ed7-9428-db4cd60154b4 00:44:47.978 09:55:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:44:47.978 09:55:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:44:47.978 09:55:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 66776 00:44:47.978 09:55:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 66776 00:44:48.237 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 66776 Killed "${NVMF_APP[@]}" "$@" 00:44:48.237 09:55:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:44:48.237 09:55:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:44:48.237 09:55:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:44:48.237 09:55:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:48.237 09:55:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:44:48.237 09:55:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=67599 00:44:48.237 09:55:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 67599 00:44:48.238 09:55:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:44:48.238 09:55:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 67599 ']' 00:44:48.238 09:55:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:48.238 09:55:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:48.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:48.238 09:55:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:48.238 09:55:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:48.238 09:55:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:44:48.238 [2024-12-09 09:55:55.101530] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:44:48.238 [2024-12-09 09:55:55.101607] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:48.238 [2024-12-09 09:55:55.252849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:48.497 [2024-12-09 09:55:55.306052] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:48.497 [2024-12-09 09:55:55.306104] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:48.497 [2024-12-09 09:55:55.306111] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:48.497 [2024-12-09 09:55:55.306115] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:48.497 [2024-12-09 09:55:55.306120] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
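This is the crash the dirty test is named for: the target is SIGKILLed (the "Killed" line above) while the grown lvstore has not been cleanly unloaded, and a fresh nvmf_tgt (pid 67599) takes its place. The trace below then re-creates the AIO bdev on the same backing file, which forces blobstore recovery. In outline, with commands as traced below and expected values from this run:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    lvs=5e7904ec-cc2a-4ed7-9428-db4cd60154b4
    $RPC bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
    $RPC bdev_wait_for_examine   # lvol examine replays the dirty superblob
    $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'        # expect 61
    $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # expect 99

The point of the assertions: the capacity added by bdev_lvol_grow_lvstore must survive an unclean shutdown.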
00:44:48.497 [2024-12-09 09:55:55.306379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:49.066 09:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:49.066 09:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:44:49.066 09:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:44:49.066 09:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:49.066 09:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:44:49.066 09:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:49.067 09:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:44:49.328 [2024-12-09 09:55:56.296234] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:44:49.328 [2024-12-09 09:55:56.296453] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:44:49.328 [2024-12-09 09:55:56.296592] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:44:49.328 09:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:44:49.328 09:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f480e75b-a9ea-498f-b863-bcdc3ebde34b 00:44:49.328 09:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f480e75b-a9ea-498f-b863-bcdc3ebde34b 00:44:49.328 09:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:44:49.328 09:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:44:49.328 09:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:44:49.328 09:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:44:49.328 09:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:44:49.589 09:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f480e75b-a9ea-498f-b863-bcdc3ebde34b -t 2000 00:44:49.848 [ 00:44:49.848 { 00:44:49.848 "aliases": [ 00:44:49.848 "lvs/lvol" 00:44:49.848 ], 00:44:49.848 "assigned_rate_limits": { 00:44:49.848 "r_mbytes_per_sec": 0, 00:44:49.848 "rw_ios_per_sec": 0, 00:44:49.848 "rw_mbytes_per_sec": 0, 00:44:49.848 "w_mbytes_per_sec": 0 00:44:49.848 }, 00:44:49.848 "block_size": 4096, 00:44:49.848 "claimed": false, 00:44:49.848 "driver_specific": { 00:44:49.848 "lvol": { 00:44:49.848 "base_bdev": "aio_bdev", 00:44:49.848 "clone": false, 00:44:49.848 "esnap_clone": false, 00:44:49.848 "lvol_store_uuid": "5e7904ec-cc2a-4ed7-9428-db4cd60154b4", 00:44:49.848 "num_allocated_clusters": 38, 00:44:49.848 "snapshot": false, 00:44:49.848 
"thin_provision": false 00:44:49.848 } 00:44:49.848 }, 00:44:49.848 "name": "f480e75b-a9ea-498f-b863-bcdc3ebde34b", 00:44:49.848 "num_blocks": 38912, 00:44:49.848 "product_name": "Logical Volume", 00:44:49.848 "supported_io_types": { 00:44:49.848 "abort": false, 00:44:49.848 "compare": false, 00:44:49.848 "compare_and_write": false, 00:44:49.848 "copy": false, 00:44:49.848 "flush": false, 00:44:49.848 "get_zone_info": false, 00:44:49.848 "nvme_admin": false, 00:44:49.848 "nvme_io": false, 00:44:49.848 "nvme_io_md": false, 00:44:49.848 "nvme_iov_md": false, 00:44:49.848 "read": true, 00:44:49.848 "reset": true, 00:44:49.848 "seek_data": true, 00:44:49.848 "seek_hole": true, 00:44:49.848 "unmap": true, 00:44:49.848 "write": true, 00:44:49.848 "write_zeroes": true, 00:44:49.848 "zcopy": false, 00:44:49.848 "zone_append": false, 00:44:49.848 "zone_management": false 00:44:49.848 }, 00:44:49.848 "uuid": "f480e75b-a9ea-498f-b863-bcdc3ebde34b", 00:44:49.848 "zoned": false 00:44:49.848 } 00:44:49.848 ] 00:44:49.848 09:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:44:49.848 09:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e7904ec-cc2a-4ed7-9428-db4cd60154b4 00:44:49.848 09:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:44:50.107 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:44:50.107 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e7904ec-cc2a-4ed7-9428-db4cd60154b4 00:44:50.107 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:44:50.367 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:44:50.367 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:44:50.627 [2024-12-09 09:55:57.460164] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:44:50.627 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e7904ec-cc2a-4ed7-9428-db4cd60154b4 00:44:50.627 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:44:50.627 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e7904ec-cc2a-4ed7-9428-db4cd60154b4 00:44:50.627 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:50.627 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:50.627 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:50.627 09:55:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:50.627 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:50.627 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:50.627 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:50.627 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:44:50.627 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e7904ec-cc2a-4ed7-9428-db4cd60154b4 00:44:50.886 2024/12/09 09:55:57 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:5e7904ec-cc2a-4ed7-9428-db4cd60154b4], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:44:50.886 request: 00:44:50.886 { 00:44:50.886 "method": "bdev_lvol_get_lvstores", 00:44:50.886 "params": { 00:44:50.886 "uuid": "5e7904ec-cc2a-4ed7-9428-db4cd60154b4" 00:44:50.886 } 00:44:50.886 } 00:44:50.886 Got JSON-RPC error response 00:44:50.886 GoRPCClient: error on JSON-RPC call 00:44:50.886 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:44:50.886 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:50.887 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:50.887 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:50.887 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:44:51.145 aio_bdev 00:44:51.145 09:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f480e75b-a9ea-498f-b863-bcdc3ebde34b 00:44:51.145 09:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f480e75b-a9ea-498f-b863-bcdc3ebde34b 00:44:51.145 09:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:44:51.145 09:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:44:51.145 09:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:44:51.145 09:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:44:51.145 09:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:44:51.405 09:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f480e75b-a9ea-498f-b863-bcdc3ebde34b -t 2000 00:44:51.663 [ 
00:44:51.663 { 00:44:51.663 "aliases": [ 00:44:51.663 "lvs/lvol" 00:44:51.663 ], 00:44:51.663 "assigned_rate_limits": { 00:44:51.663 "r_mbytes_per_sec": 0, 00:44:51.663 "rw_ios_per_sec": 0, 00:44:51.663 "rw_mbytes_per_sec": 0, 00:44:51.663 "w_mbytes_per_sec": 0 00:44:51.663 }, 00:44:51.663 "block_size": 4096, 00:44:51.664 "claimed": false, 00:44:51.664 "driver_specific": { 00:44:51.664 "lvol": { 00:44:51.664 "base_bdev": "aio_bdev", 00:44:51.664 "clone": false, 00:44:51.664 "esnap_clone": false, 00:44:51.664 "lvol_store_uuid": "5e7904ec-cc2a-4ed7-9428-db4cd60154b4", 00:44:51.664 "num_allocated_clusters": 38, 00:44:51.664 "snapshot": false, 00:44:51.664 "thin_provision": false 00:44:51.664 } 00:44:51.664 }, 00:44:51.664 "name": "f480e75b-a9ea-498f-b863-bcdc3ebde34b", 00:44:51.664 "num_blocks": 38912, 00:44:51.664 "product_name": "Logical Volume", 00:44:51.664 "supported_io_types": { 00:44:51.664 "abort": false, 00:44:51.664 "compare": false, 00:44:51.664 "compare_and_write": false, 00:44:51.664 "copy": false, 00:44:51.664 "flush": false, 00:44:51.664 "get_zone_info": false, 00:44:51.664 "nvme_admin": false, 00:44:51.664 "nvme_io": false, 00:44:51.664 "nvme_io_md": false, 00:44:51.664 "nvme_iov_md": false, 00:44:51.664 "read": true, 00:44:51.664 "reset": true, 00:44:51.664 "seek_data": true, 00:44:51.664 "seek_hole": true, 00:44:51.664 "unmap": true, 00:44:51.664 "write": true, 00:44:51.664 "write_zeroes": true, 00:44:51.664 "zcopy": false, 00:44:51.664 "zone_append": false, 00:44:51.664 "zone_management": false 00:44:51.664 }, 00:44:51.664 "uuid": "f480e75b-a9ea-498f-b863-bcdc3ebde34b", 00:44:51.664 "zoned": false 00:44:51.664 } 00:44:51.664 ] 00:44:51.664 09:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:44:51.664 09:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e7904ec-cc2a-4ed7-9428-db4cd60154b4 00:44:51.664 09:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:44:51.923 09:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:44:51.923 09:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:44:51.923 09:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e7904ec-cc2a-4ed7-9428-db4cd60154b4 00:44:52.181 09:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:44:52.181 09:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f480e75b-a9ea-498f-b863-bcdc3ebde34b 00:44:52.181 09:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5e7904ec-cc2a-4ed7-9428-db4cd60154b4 00:44:52.440 09:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:44:52.698 09:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:44:53.268 00:44:53.268 real 0m19.941s 00:44:53.268 user 0m42.117s 00:44:53.268 sys 0m6.707s 00:44:53.268 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:53.268 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:44:53.268 ************************************ 00:44:53.268 END TEST lvs_grow_dirty 00:44:53.268 ************************************ 00:44:53.268 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:44:53.268 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:44:53.268 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:44:53.268 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:44:53.268 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:44:53.268 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:44:53.268 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:44:53.268 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:44:53.268 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:44:53.268 nvmf_trace.0 00:44:53.268 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:44:53.268 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:44:53.268 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:53.268 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:44:53.526 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:53.526 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:44:53.526 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:53.526 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:53.526 rmmod nvme_tcp 00:44:53.526 rmmod nvme_fabrics 00:44:53.527 rmmod nvme_keyring 00:44:53.527 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:53.527 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:44:53.527 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:44:53.527 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 67599 ']' 00:44:53.527 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 67599 00:44:53.527 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 67599 ']' 00:44:53.527 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 67599 00:44:53.527 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:44:53.527 09:56:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:53.527 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67599 00:44:53.527 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:53.527 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:53.527 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67599' 00:44:53.527 killing process with pid 67599 00:44:53.527 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 67599 00:44:53.527 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 67599 00:44:53.785 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:44:53.785 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:53.785 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:53.785 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:44:53.785 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:44:53.785 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:44:53.785 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:53.785 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:53.785 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:44:53.785 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:44:53.785 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:44:53.785 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:44:53.785 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:44:53.785 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:44:53.785 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:44:53.785 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:44:53.785 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:44:53.785 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:44:54.044 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:44:54.044 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:44:54.044 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:44:54.044 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:44:54.044 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:44:54.044 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:54.044 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:54.044 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:54.044 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:44:54.044 00:44:54.044 real 0m41.031s 00:44:54.044 user 1m6.012s 00:44:54.044 sys 0m9.928s 00:44:54.044 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:54.044 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:44:54.044 ************************************ 00:44:54.044 END TEST nvmf_lvs_grow 00:44:54.044 ************************************ 00:44:54.044 09:56:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:44:54.044 09:56:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:54.044 09:56:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:54.044 09:56:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:44:54.303 ************************************ 00:44:54.303 START TEST nvmf_bdev_io_wait 00:44:54.303 ************************************ 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:44:54.303 * Looking for test storage... 
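The lvs_grow_dirty teardown traced above is the interesting part of that test: the aio bdev backing a still-open lvstore is deleted, the follow-up bdev_lvol_get_lvstores is expected to fail with Code=-19 (No such device), and re-creating the aio bdev over the same file makes the blobstore replay its metadata ("Performing recovery on blobstore") so the lvol bdev reappears with the same UUID and cluster counts. A minimal sketch of that sequence against a running target; rpc.py stands for the scripts/rpc.py used above, and LVS_UUID/AIO_FILE are placeholder variables for the lvstore UUID and backing file from the run:

  # Drop the backing bdev while the lvstore is still open; the lvstore
  # is hot-removed along with it.
  rpc.py bdev_aio_delete aio_bdev

  # The lvstore is gone, so this is expected to fail with -19 (No such device).
  rpc.py bdev_lvol_get_lvstores -u "$LVS_UUID" || echo "expected failure"

  # Re-create the aio bdev over the same file: blobstore recovery replays
  # the on-disk metadata and the lvol bdev reappears with its old UUID.
  rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096
  rpc.py bdev_wait_for_examine

  # The recovered lvstore must report the same cluster accounting as before.
  rpc.py bdev_lvol_get_lvstores -u "$LVS_UUID" \
      | jq -r '.[0].free_clusters, .[0].total_data_clusters'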
00:44:54.303 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:54.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:54.303 --rc genhtml_branch_coverage=1 00:44:54.303 --rc genhtml_function_coverage=1 00:44:54.303 --rc genhtml_legend=1 00:44:54.303 --rc geninfo_all_blocks=1 00:44:54.303 --rc geninfo_unexecuted_blocks=1 00:44:54.303 00:44:54.303 ' 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:54.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:54.303 --rc genhtml_branch_coverage=1 00:44:54.303 --rc genhtml_function_coverage=1 00:44:54.303 --rc genhtml_legend=1 00:44:54.303 --rc geninfo_all_blocks=1 00:44:54.303 --rc geninfo_unexecuted_blocks=1 00:44:54.303 00:44:54.303 ' 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:54.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:54.303 --rc genhtml_branch_coverage=1 00:44:54.303 --rc genhtml_function_coverage=1 00:44:54.303 --rc genhtml_legend=1 00:44:54.303 --rc geninfo_all_blocks=1 00:44:54.303 --rc geninfo_unexecuted_blocks=1 00:44:54.303 00:44:54.303 ' 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:54.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:54.303 --rc genhtml_branch_coverage=1 00:44:54.303 --rc genhtml_function_coverage=1 00:44:54.303 --rc genhtml_legend=1 00:44:54.303 --rc geninfo_all_blocks=1 00:44:54.303 --rc geninfo_unexecuted_blocks=1 00:44:54.303 00:44:54.303 ' 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:54.303 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:54.304 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
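The cmp_versions trace above is the lcov gate for the coverage flags: scripts/common.sh splits both version strings on '.' and '-' and compares the fields numerically until one side wins; here ver1=1.15 loses to ver2=2 in the first field, so 'lt 1.15 2' succeeds and the branch/function coverage options are exported. A condensed sketch of the same comparison idea, not the actual scripts/common.sh implementation:

  # Succeed (return 0) when $1 is an older version than $2, comparing
  # dot/dash-separated fields numerically: version_lt 1.15 2 -> 0.
  version_lt() {
      local IFS=.-
      local -a v1 v2
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < max; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }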
00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:44:54.304 
09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:44:54.304 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:44:54.563 Cannot find device "nvmf_init_br" 00:44:54.563 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:44:54.563 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:44:54.563 Cannot find device "nvmf_init_br2" 00:44:54.563 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:44:54.563 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:44:54.563 Cannot find device "nvmf_tgt_br" 00:44:54.563 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:44:54.563 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:44:54.563 Cannot find device "nvmf_tgt_br2" 00:44:54.563 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:44:54.563 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:44:54.563 Cannot find device "nvmf_init_br" 00:44:54.563 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:44:54.563 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:44:54.563 Cannot find device "nvmf_init_br2" 00:44:54.563 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:44:54.563 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:44:54.563 Cannot find device "nvmf_tgt_br" 00:44:54.564 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:44:54.564 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:44:54.564 Cannot find device "nvmf_tgt_br2" 00:44:54.564 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:44:54.564 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:44:54.564 Cannot find device "nvmf_br" 00:44:54.564 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:44:54.564 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:44:54.564 Cannot find device "nvmf_init_if" 00:44:54.564 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:44:54.564 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:44:54.564 Cannot find device "nvmf_init_if2" 00:44:54.564 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:44:54.564 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:44:54.564 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:44:54.564 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:44:54.564 
09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:44:54.564 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:44:54.564 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:44:54.564 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:44:54.564 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:44:54.564 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:44:54.564 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:44:54.564 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:44:54.564 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:44:54.564 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:44:54.564 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:44:54.564 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:44:54.564 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:44:54.564 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:44:54.564 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:44:54.564 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:44:54.564 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:44:54.564 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:44:54.823 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:44:54.823 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:44:54.823 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:44:54.823 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:44:54.823 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:44:54.823 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:44:54.823 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:44:54.823 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:44:54.823 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:44:54.823 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:44:54.823 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:44:54.823 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:44:54.823 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:44:54.823 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:44:54.823 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:44:54.823 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:44:54.824 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:44:54.824 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:44:54.824 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:44:54.824 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:44:54.824 00:44:54.824 --- 10.0.0.3 ping statistics --- 00:44:54.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:54.824 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:44:54.824 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:44:54.824 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:44:54.824 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 00:44:54.824 00:44:54.824 --- 10.0.0.4 ping statistics --- 00:44:54.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:54.824 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:44:54.824 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:44:54.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:44:54.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:44:54.824 00:44:54.824 --- 10.0.0.1 ping statistics --- 00:44:54.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:54.824 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:44:54.824 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:44:54.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:44:54.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:44:54.824 00:44:54.824 --- 10.0.0.2 ping statistics --- 00:44:54.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:54.824 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:44:54.824 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:54.824 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:44:54.824 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:44:54.824 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:54.824 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:54.824 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:54.824 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:54.824 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:54.824 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:54.824 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:44:54.824 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:44:54.824 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:54.824 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:54.824 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=68068 00:44:54.824 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:44:54.824 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 68068 00:44:54.824 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 68068 ']' 00:44:54.824 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:54.824 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:54.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:54.824 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:54.824 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:54.824 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:54.824 [2024-12-09 09:56:01.810284] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
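All of the addressing pinged above comes from nvmf_veth_init: the target runs inside the nvmf_tgt_ns_spdk network namespace on 10.0.0.3/10.0.0.4, the initiator side keeps 10.0.0.1/10.0.0.2 in the default namespace, the veth pairs are joined by the nvmf_br bridge, and iptables ACCEPT rules open TCP port 4420; the "Cannot find device" messages earlier are just pre-cleanup of a topology that did not exist yet. A compressed sketch of one initiator/target pair of that topology, assuming root privileges (the script above builds two pairs per side):

  # One namespace for the target, one bridge to join both veth ends.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_br type bridge && ip link set nvmf_br up

  # veth pairs: the *_if ends carry addresses, the *_br ends join the bridge.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # Initiator on 10.0.0.1, target on 10.0.0.3, same /24.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  # Bring everything up and enslave the bridge-side ends.
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # Open the NVMe/TCP port and verify reachability across the bridge.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3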
00:44:54.824 [2024-12-09 09:56:01.810353] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:55.109 [2024-12-09 09:56:01.962864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:55.109 [2024-12-09 09:56:02.020199] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:55.109 [2024-12-09 09:56:02.020250] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:55.109 [2024-12-09 09:56:02.020256] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:55.109 [2024-12-09 09:56:02.020261] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:55.109 [2024-12-09 09:56:02.020281] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:55.109 [2024-12-09 09:56:02.021381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:55.109 [2024-12-09 09:56:02.021251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:55.109 [2024-12-09 09:56:02.021383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:44:55.109 [2024-12-09 09:56:02.021296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:44:56.078 [2024-12-09 09:56:02.901545] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:56.078 Malloc0 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:56.078 [2024-12-09 09:56:02.964189] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=68121 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=68123 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:44:56.078 09:56:02 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:56.078 { 00:44:56.078 "params": { 00:44:56.078 "name": "Nvme$subsystem", 00:44:56.078 "trtype": "$TEST_TRANSPORT", 00:44:56.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:56.078 "adrfam": "ipv4", 00:44:56.078 "trsvcid": "$NVMF_PORT", 00:44:56.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:56.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:56.078 "hdgst": ${hdgst:-false}, 00:44:56.078 "ddgst": ${ddgst:-false} 00:44:56.078 }, 00:44:56.078 "method": "bdev_nvme_attach_controller" 00:44:56.078 } 00:44:56.078 EOF 00:44:56.078 )") 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:56.078 { 00:44:56.078 "params": { 00:44:56.078 "name": "Nvme$subsystem", 00:44:56.078 "trtype": "$TEST_TRANSPORT", 00:44:56.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:56.078 "adrfam": "ipv4", 00:44:56.078 "trsvcid": "$NVMF_PORT", 00:44:56.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:56.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:56.078 "hdgst": ${hdgst:-false}, 00:44:56.078 "ddgst": ${ddgst:-false} 00:44:56.078 }, 00:44:56.078 "method": "bdev_nvme_attach_controller" 00:44:56.078 } 00:44:56.078 EOF 00:44:56.078 )") 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=68125 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
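Each bdevperf instance above gets its target description through gen_nvmf_target_json: a heredoc template per subsystem is appended to a config array with $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT expanded by the shell, passed through jq, and handed to bdevperf on /dev/fd/63. The helper assembles such fragments into the complete JSON config bdevperf expects; the sketch below builds just the one attach-controller fragment shown in the trace, using jq -n instead of a heredoc, with /tmp/nvme1.json as a hypothetical stand-in for the process substitution the script really uses:

  # Build the attach-controller fragment, filling in the connection
  # parameters exported by nvmftestinit, and validate it with jq.
  jq -n --arg ip "$NVMF_FIRST_TARGET_IP" --arg port "$NVMF_PORT" '{
      params: {
        name: "Nvme1", trtype: "tcp", traddr: $ip, adrfam: "ipv4",
        trsvcid: $port,
        subnqn: "nqn.2016-06.io.spdk:cnode1",
        hostnqn: "nqn.2016-06.io.spdk:host1",
        hdgst: false, ddgst: false
      },
      method: "bdev_nvme_attach_controller"
    }' > /tmp/nvme1.json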
00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:44:56.078 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:56.079 "params": { 00:44:56.079 "name": "Nvme1", 00:44:56.079 "trtype": "tcp", 00:44:56.079 "traddr": "10.0.0.3", 00:44:56.079 "adrfam": "ipv4", 00:44:56.079 "trsvcid": "4420", 00:44:56.079 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:56.079 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:56.079 "hdgst": false, 00:44:56.079 "ddgst": false 00:44:56.079 }, 00:44:56.079 "method": "bdev_nvme_attach_controller" 00:44:56.079 }' 00:44:56.079 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:44:56.079 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:44:56.079 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:56.079 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:44:56.079 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=68130 00:44:56.079 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:44:56.079 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:44:56.079 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:56.079 { 00:44:56.079 "params": { 00:44:56.079 "name": "Nvme$subsystem", 00:44:56.079 "trtype": "$TEST_TRANSPORT", 00:44:56.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:56.079 "adrfam": "ipv4", 00:44:56.079 "trsvcid": "$NVMF_PORT", 00:44:56.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:56.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:56.079 "hdgst": ${hdgst:-false}, 00:44:56.079 "ddgst": ${ddgst:-false} 00:44:56.079 }, 00:44:56.079 "method": "bdev_nvme_attach_controller" 00:44:56.079 } 00:44:56.079 EOF 00:44:56.079 )") 00:44:56.079 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:44:56.079 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:44:56.079 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:44:56.079 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:44:56.079 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:56.079 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:56.079 { 00:44:56.079 "params": { 00:44:56.079 "name": "Nvme$subsystem", 00:44:56.079 "trtype": "$TEST_TRANSPORT", 00:44:56.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:56.079 "adrfam": "ipv4", 00:44:56.079 "trsvcid": "$NVMF_PORT", 00:44:56.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:56.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:56.079 "hdgst": ${hdgst:-false}, 00:44:56.079 "ddgst": ${ddgst:-false} 00:44:56.079 }, 00:44:56.079 "method": "bdev_nvme_attach_controller" 00:44:56.079 } 00:44:56.079 EOF 00:44:56.079 )") 00:44:56.079 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:44:56.079 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:44:56.079 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:56.079 "params": { 00:44:56.079 "name": "Nvme1", 00:44:56.079 "trtype": "tcp", 00:44:56.079 "traddr": "10.0.0.3", 00:44:56.079 "adrfam": "ipv4", 00:44:56.079 "trsvcid": "4420", 00:44:56.079 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:56.079 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:56.079 "hdgst": false, 00:44:56.079 "ddgst": false 00:44:56.079 }, 00:44:56.079 "method": "bdev_nvme_attach_controller" 00:44:56.079 }' 00:44:56.079 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:44:56.079 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:44:56.079 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
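
Steps 27 through 34 of target/bdev_io_wait.sh, traced above, launch four bdevperf instances in parallel (write, read, flush and unmap workloads), each reading its target config from what is evidently a process substitution, hence the --json /dev/fd/63 in the trace. A sketch of that launch sequence, with $SPDK standing in for /home/vagrant/spdk_repo/spdk:

bdevperf="$SPDK/build/examples/bdevperf"
"$bdevperf" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!
"$bdevperf" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 &
READ_PID=$!
"$bdevperf" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
FLUSH_PID=$!
"$bdevperf" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
UNMAP_PID=$!
sync
# The trace waits on each PID in turn (steps 37-40: wait 68121, 68123, ...).
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"

The distinct core masks (0x10 through 0x80) and shared-memory ids (-i 1 through 4) keep the four DPDK processes apart, which is what produces the interleaved spdk1 through spdk4 --file-prefix EAL arguments in the initialization lines that follow.
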
00:44:56.079 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:44:56.079 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:56.079 "params": { 00:44:56.079 "name": "Nvme1", 00:44:56.079 "trtype": "tcp", 00:44:56.079 "traddr": "10.0.0.3", 00:44:56.079 "adrfam": "ipv4", 00:44:56.079 "trsvcid": "4420", 00:44:56.079 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:56.079 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:56.079 "hdgst": false, 00:44:56.079 "ddgst": false 00:44:56.079 }, 00:44:56.079 "method": "bdev_nvme_attach_controller" 00:44:56.079 }' 00:44:56.079 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:44:56.079 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:56.079 "params": { 00:44:56.079 "name": "Nvme1", 00:44:56.079 "trtype": "tcp", 00:44:56.079 "traddr": "10.0.0.3", 00:44:56.079 "adrfam": "ipv4", 00:44:56.079 "trsvcid": "4420", 00:44:56.079 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:56.079 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:56.079 "hdgst": false, 00:44:56.079 "ddgst": false 00:44:56.079 }, 00:44:56.079 "method": "bdev_nvme_attach_controller" 00:44:56.079 }' 00:44:56.079 [2024-12-09 09:56:03.029408] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:44:56.079 [2024-12-09 09:56:03.029909] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:44:56.079 [2024-12-09 09:56:03.048804] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:44:56.079 [2024-12-09 09:56:03.048804] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:44:56.079 [2024-12-09 09:56:03.048898] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:44:56.079 [2024-12-09 09:56:03.049000] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:44:56.079 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 68121 00:44:56.079 [2024-12-09 09:56:03.074740] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:44:56.079 [2024-12-09 09:56:03.074945] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:44:56.338 [2024-12-09 09:56:03.242887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:56.338 [2024-12-09 09:56:03.289500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:44:56.338 [2024-12-09 09:56:03.309214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:56.338 [2024-12-09 09:56:03.355678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:44:56.338 [2024-12-09 09:56:03.377238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:56.597 Running I/O for 1 seconds... 00:44:56.597 [2024-12-09 09:56:03.433427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:44:56.597 [2024-12-09 09:56:03.441207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:56.597 Running I/O for 1 seconds... 00:44:56.597 [2024-12-09 09:56:03.486086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:44:56.597 Running I/O for 1 seconds... 00:44:56.597 Running I/O for 1 seconds... 00:44:57.535 7217.00 IOPS, 28.19 MiB/s 00:44:57.535 Latency(us) 00:44:57.535 [2024-12-09T09:56:04.579Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:57.535 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:44:57.535 Nvme1n1 : 1.02 7219.10 28.20 0.00 0.00 17495.94 7498.01 34113.06 00:44:57.535 [2024-12-09T09:56:04.579Z] =================================================================================================================== 00:44:57.535 [2024-12-09T09:56:04.579Z] Total : 7219.10 28.20 0.00 0.00 17495.94 7498.01 34113.06 00:44:57.535 11233.00 IOPS, 43.88 MiB/s 00:44:57.535 Latency(us) 00:44:57.535 [2024-12-09T09:56:04.579Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:57.535 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:44:57.535 Nvme1n1 : 1.01 11294.38 44.12 0.00 0.00 11294.89 4865.12 22551.25 00:44:57.535 [2024-12-09T09:56:04.579Z] =================================================================================================================== 00:44:57.535 [2024-12-09T09:56:04.579Z] Total : 11294.38 44.12 0.00 0.00 11294.89 4865.12 22551.25 00:44:57.795 208264.00 IOPS, 813.53 MiB/s 00:44:57.795 Latency(us) 00:44:57.795 [2024-12-09T09:56:04.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:57.795 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:44:57.795 Nvme1n1 : 1.00 207902.14 812.12 0.00 0.00 612.46 259.35 3334.04 00:44:57.795 [2024-12-09T09:56:04.839Z] =================================================================================================================== 00:44:57.795 [2024-12-09T09:56:04.839Z] Total : 207902.14 812.12 0.00 0.00 612.46 259.35 3334.04 00:44:57.795 7771.00 IOPS, 30.36 MiB/s 00:44:57.795 Latency(us) 00:44:57.795 [2024-12-09T09:56:04.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:57.795 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:44:57.795 Nvme1n1 : 1.00 7879.88 30.78 0.00 0.00 16207.20 3019.23 45560.40 00:44:57.795 [2024-12-09T09:56:04.839Z] 
=================================================================================================================== 00:44:57.795 [2024-12-09T09:56:04.839Z] Total : 7879.88 30.78 0.00 0.00 16207.20 3019.23 45560.40 00:44:57.795 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 68123 00:44:57.795 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 68125 00:44:57.795 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 68130 00:44:57.795 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:57.795 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:57.795 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:57.795 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:57.795 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:44:57.795 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:44:57.795 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:57.795 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:44:58.056 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:58.056 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:44:58.056 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:58.056 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:58.056 rmmod nvme_tcp 00:44:58.056 rmmod nvme_fabrics 00:44:58.056 rmmod nvme_keyring 00:44:58.056 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:58.056 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:44:58.056 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:44:58.056 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 68068 ']' 00:44:58.056 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 68068 00:44:58.056 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 68068 ']' 00:44:58.056 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 68068 00:44:58.056 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:44:58.056 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:58.056 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68068 00:44:58.056 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:58.056 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:58.056 killing process with pid 68068 00:44:58.056 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 68068' 00:44:58.056 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 68068 00:44:58.056 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 68068 00:44:58.315 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:44:58.315 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:58.315 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:58.315 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:44:58.315 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:58.315 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:44:58.315 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:44:58.315 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:58.315 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:44:58.315 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:44:58.315 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:44:58.315 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:44:58.315 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:44:58.315 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:44:58.315 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:44:58.315 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:44:58.315 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:44:58.315 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:44:58.315 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:44:58.315 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:44:58.315 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:44:58.315 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:44:58.574 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:44:58.574 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:58.574 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:58.574 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:58.574 09:56:05 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:44:58.574 00:44:58.574 real 0m4.325s 00:44:58.574 user 0m17.568s 00:44:58.574 sys 0m1.871s 00:44:58.574 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:58.574 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:58.574 ************************************ 00:44:58.574 END TEST nvmf_bdev_io_wait 00:44:58.574 ************************************ 00:44:58.574 09:56:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:44:58.574 09:56:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:58.574 09:56:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:58.574 09:56:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:44:58.574 ************************************ 00:44:58.574 START TEST nvmf_queue_depth 00:44:58.574 ************************************ 00:44:58.574 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:44:58.575 * Looking for test storage... 00:44:58.575 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:44:58.575 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:58.575 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:44:58.575 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:58.835 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:58.835 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:58.835 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:58.835 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:58.835 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:44:58.835 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:44:58.835 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:44:58.835 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:44:58.835 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:44:58.835 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:44:58.835 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:44:58.835 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:58.835 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:44:58.835 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:44:58.835 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:58.835 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:58.835 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:44:58.835 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:44:58.835 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:58.835 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:44:58.835 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:44:58.835 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:44:58.835 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:44:58.835 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:58.835 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:44:58.835 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:44:58.835 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:58.835 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:58.835 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:44:58.835 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:58.835 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:58.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:58.835 --rc genhtml_branch_coverage=1 00:44:58.835 --rc genhtml_function_coverage=1 00:44:58.835 --rc genhtml_legend=1 00:44:58.835 --rc geninfo_all_blocks=1 00:44:58.835 --rc geninfo_unexecuted_blocks=1 00:44:58.835 00:44:58.835 ' 00:44:58.835 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:58.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:58.835 --rc genhtml_branch_coverage=1 00:44:58.835 --rc genhtml_function_coverage=1 00:44:58.835 --rc genhtml_legend=1 00:44:58.835 --rc geninfo_all_blocks=1 00:44:58.835 --rc geninfo_unexecuted_blocks=1 00:44:58.835 00:44:58.835 ' 00:44:58.835 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:58.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:58.835 --rc genhtml_branch_coverage=1 00:44:58.835 --rc genhtml_function_coverage=1 00:44:58.835 --rc genhtml_legend=1 00:44:58.836 --rc geninfo_all_blocks=1 00:44:58.836 --rc geninfo_unexecuted_blocks=1 00:44:58.836 00:44:58.836 ' 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:58.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:58.836 --rc genhtml_branch_coverage=1 00:44:58.836 --rc genhtml_function_coverage=1 00:44:58.836 --rc genhtml_legend=1 00:44:58.836 --rc geninfo_all_blocks=1 00:44:58.836 --rc geninfo_unexecuted_blocks=1 00:44:58.836 00:44:58.836 ' 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:44:58.836 09:56:05 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:58.836 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:44:58.836 
09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:44:58.836 09:56:05 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:44:58.836 Cannot find device "nvmf_init_br" 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:44:58.836 Cannot find device "nvmf_init_br2" 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:44:58.836 Cannot find device "nvmf_tgt_br" 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:44:58.836 Cannot find device "nvmf_tgt_br2" 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:44:58.836 Cannot find device "nvmf_init_br" 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:44:58.836 Cannot find device "nvmf_init_br2" 00:44:58.836 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:44:58.837 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:44:59.096 Cannot find device "nvmf_tgt_br" 00:44:59.096 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:44:59.096 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:44:59.096 Cannot find device "nvmf_tgt_br2" 00:44:59.096 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:44:59.096 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:44:59.096 Cannot find device "nvmf_br" 00:44:59.096 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:44:59.096 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:44:59.096 Cannot find device "nvmf_init_if" 00:44:59.096 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:44:59.096 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:44:59.096 Cannot find device "nvmf_init_if2" 00:44:59.096 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:44:59.096 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:44:59.096 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:44:59.096 09:56:05 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:44:59.096 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:44:59.096 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:44:59.096 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:44:59.096 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:44:59.096 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:44:59.096 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:44:59.096 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:44:59.096 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:44:59.096 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:44:59.096 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:44:59.096 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:44:59.096 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:44:59.096 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:44:59.096 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:44:59.096 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:44:59.096 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:44:59.096 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:44:59.096 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:44:59.096 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:44:59.096 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:44:59.096 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:44:59.096 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:44:59.096 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:44:59.096 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:44:59.096 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:44:59.096 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:44:59.096 
09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:44:59.096 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:44:59.355 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:44:59.355 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:44:59.355 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:44:59.355 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:44:59.355 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:44:59.355 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:44:59.355 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:44:59.355 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:44:59.355 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:44:59.355 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.128 ms 00:44:59.355 00:44:59.355 --- 10.0.0.3 ping statistics --- 00:44:59.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:59.355 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:44:59.355 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:44:59.355 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:44:59.355 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.099 ms 00:44:59.355 00:44:59.355 --- 10.0.0.4 ping statistics --- 00:44:59.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:59.355 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:44:59.355 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:44:59.355 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:44:59.355 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:44:59.355 00:44:59.355 --- 10.0.0.1 ping statistics --- 00:44:59.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:59.355 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:44:59.356 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:44:59.356 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:44:59.356 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:44:59.356 00:44:59.356 --- 10.0.0.2 ping statistics --- 00:44:59.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:59.356 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:44:59.356 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:59.356 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:44:59.356 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:44:59.356 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:59.356 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:59.356 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:59.356 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:59.356 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:59.356 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:59.356 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:44:59.356 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:44:59.356 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:59.356 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:44:59.356 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=68415 00:44:59.356 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:44:59.356 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 68415 00:44:59.356 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 68415 ']' 00:44:59.356 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:59.356 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:59.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:59.356 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:59.356 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:59.356 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:44:59.356 [2024-12-09 09:56:06.306965] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:44:59.356 [2024-12-09 09:56:06.307044] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:59.615 [2024-12-09 09:56:06.459042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:59.615 [2024-12-09 09:56:06.512923] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:59.615 [2024-12-09 09:56:06.512971] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:59.615 [2024-12-09 09:56:06.512977] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:59.615 [2024-12-09 09:56:06.512983] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:59.615 [2024-12-09 09:56:06.512987] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:59.615 [2024-12-09 09:56:06.513251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:00.551 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:00.551 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:45:00.551 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:45:00.551 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:00.551 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:45:00.551 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:00.551 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:45:00.551 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:00.551 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:45:00.551 [2024-12-09 09:56:07.303076] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:00.551 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:00.551 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:45:00.551 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:00.551 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:45:00.551 Malloc0 00:45:00.552 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:00.552 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:45:00.552 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:00.552 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:45:00.552 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
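
At this point queue_depth.sh has created the TCP transport and a 64 MiB Malloc0 bdev and is building subsystem nqn.2016-06.io.spdk:cnode1; the namespace and the 10.0.0.3:4420 listener are added in the trace that follows, after which a standalone bdevperf drives the target at queue depth 1024. A sketch of the whole sequence as direct rpc.py calls; treating rpc_cmd as a thin wrapper over scripts/rpc.py is an assumption about the harness, and $SPDK again stands in for the repo root.

rpc="$SPDK/scripts/rpc.py"
# Target side (queue_depth.sh steps 23-27 in the trace).
"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" bdev_malloc_create 64 512 -b Malloc0
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# Initiator side (steps 29-35): start bdevperf in RPC-wait mode (-z),
# attach the remote namespace over TCP, then kick off the workload.
"$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock \
  -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
"$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
  -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests

The bdevperf instance idles after -z until the perform_tests RPC arrives, which is how the ten-second verify run at depth 1024 shown below gets its start.
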
00:45:00.552 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:45:00.552 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:00.552 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:45:00.552 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:00.552 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:45:00.552 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:00.552 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:45:00.552 [2024-12-09 09:56:07.362107] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:45:00.552 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:00.552 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=68465 00:45:00.552 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:45:00.552 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:45:00.552 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 68465 /var/tmp/bdevperf.sock 00:45:00.552 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 68465 ']' 00:45:00.552 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:45:00.552 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:00.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:45:00.552 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:45:00.552 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:00.552 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:45:00.552 [2024-12-09 09:56:07.427175] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:45:00.552 [2024-12-09 09:56:07.427252] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68465 ] 00:45:00.552 [2024-12-09 09:56:07.575591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:00.812 [2024-12-09 09:56:07.631618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:01.382 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:01.382 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:45:01.382 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:45:01.382 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:01.382 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:45:01.641 NVMe0n1 00:45:01.641 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:01.641 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:45:01.641 Running I/O for 10 seconds... 00:45:03.516 10249.00 IOPS, 40.04 MiB/s [2024-12-09T09:56:11.934Z] 10754.50 IOPS, 42.01 MiB/s [2024-12-09T09:56:12.875Z] 10922.33 IOPS, 42.67 MiB/s [2024-12-09T09:56:13.812Z] 10927.50 IOPS, 42.69 MiB/s [2024-12-09T09:56:14.750Z] 11006.00 IOPS, 42.99 MiB/s [2024-12-09T09:56:15.687Z] 11091.33 IOPS, 43.33 MiB/s [2024-12-09T09:56:16.624Z] 11179.86 IOPS, 43.67 MiB/s [2024-12-09T09:56:17.560Z] 11193.88 IOPS, 43.73 MiB/s [2024-12-09T09:56:18.937Z] 11263.56 IOPS, 44.00 MiB/s [2024-12-09T09:56:18.937Z] 11335.40 IOPS, 44.28 MiB/s 00:45:11.893 Latency(us) 00:45:11.893 [2024-12-09T09:56:18.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:11.893 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:45:11.893 Verification LBA range: start 0x0 length 0x4000 00:45:11.893 NVMe0n1 : 10.06 11364.21 44.39 0.00 0.00 89786.50 21749.94 64562.98 00:45:11.893 [2024-12-09T09:56:18.937Z] =================================================================================================================== 00:45:11.893 [2024-12-09T09:56:18.937Z] Total : 11364.21 44.39 0.00 0.00 89786.50 21749.94 64562.98 00:45:11.893 { 00:45:11.893 "results": [ 00:45:11.893 { 00:45:11.893 "job": "NVMe0n1", 00:45:11.893 "core_mask": "0x1", 00:45:11.893 "workload": "verify", 00:45:11.893 "status": "finished", 00:45:11.893 "verify_range": { 00:45:11.893 "start": 0, 00:45:11.893 "length": 16384 00:45:11.893 }, 00:45:11.893 "queue_depth": 1024, 00:45:11.893 "io_size": 4096, 00:45:11.893 "runtime": 10.064136, 00:45:11.893 "iops": 11364.214474049239, 00:45:11.893 "mibps": 44.39146278925484, 00:45:11.893 "io_failed": 0, 00:45:11.893 "io_timeout": 0, 00:45:11.893 "avg_latency_us": 89786.4998742505, 00:45:11.893 "min_latency_us": 21749.93886462882, 00:45:11.893 "max_latency_us": 64562.97641921398 00:45:11.893 } 00:45:11.893 ], 00:45:11.893 "core_count": 1 00:45:11.893 } 00:45:11.893 09:56:18 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 68465 00:45:11.893 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 68465 ']' 00:45:11.893 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 68465 00:45:11.893 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:45:11.894 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:11.894 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68465 00:45:11.894 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:11.894 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:11.894 killing process with pid 68465 00:45:11.894 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68465' 00:45:11.894 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 68465 00:45:11.894 Received shutdown signal, test time was about 10.000000 seconds 00:45:11.894 00:45:11.894 Latency(us) 00:45:11.894 [2024-12-09T09:56:18.938Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:11.894 [2024-12-09T09:56:18.938Z] =================================================================================================================== 00:45:11.894 [2024-12-09T09:56:18.938Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:11.894 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 68465 00:45:11.894 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:45:11.894 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:45:11.894 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:45:11.894 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:45:11.894 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:11.894 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:45:11.894 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:11.894 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:11.894 rmmod nvme_tcp 00:45:11.894 rmmod nvme_fabrics 00:45:11.894 rmmod nvme_keyring 00:45:11.894 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:11.894 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:45:11.894 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:45:11.894 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 68415 ']' 00:45:11.894 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 68415 00:45:11.894 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 68415 ']' 00:45:11.894 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- 
# kill -0 68415 00:45:11.894 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:45:11.894 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:11.894 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68415 00:45:12.153 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:45:12.153 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:45:12.153 killing process with pid 68415 00:45:12.153 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68415' 00:45:12.153 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 68415 00:45:12.153 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 68415 00:45:12.153 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:45:12.153 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:45:12.153 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:45:12.153 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:45:12.153 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:45:12.153 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:45:12.153 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:45:12.153 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:12.153 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:45:12.153 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:45:12.153 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:45:12.153 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:45:12.153 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:45:12.412 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:45:12.412 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:45:12.412 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:45:12.412 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:45:12.412 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:45:12.412 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:45:12.412 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:45:12.412 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:45:12.412 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:12.412 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:45:12.412 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:12.412 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:12.412 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:12.412 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:45:12.412 00:45:12.412 real 0m13.933s 00:45:12.412 user 0m23.610s 00:45:12.412 sys 0m2.074s 00:45:12.412 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:12.412 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:45:12.412 ************************************ 00:45:12.412 END TEST nvmf_queue_depth 00:45:12.412 ************************************ 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:45:12.671 ************************************ 00:45:12.671 START TEST nvmf_target_multipath 00:45:12.671 ************************************ 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:45:12.671 * Looking for test storage... 
00:45:12.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:45:12.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:12.671 --rc genhtml_branch_coverage=1 00:45:12.671 --rc genhtml_function_coverage=1 00:45:12.671 --rc genhtml_legend=1 00:45:12.671 --rc geninfo_all_blocks=1 00:45:12.671 --rc geninfo_unexecuted_blocks=1 00:45:12.671 00:45:12.671 ' 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:45:12.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:12.671 --rc genhtml_branch_coverage=1 00:45:12.671 --rc genhtml_function_coverage=1 00:45:12.671 --rc genhtml_legend=1 00:45:12.671 --rc geninfo_all_blocks=1 00:45:12.671 --rc geninfo_unexecuted_blocks=1 00:45:12.671 00:45:12.671 ' 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:45:12.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:12.671 --rc genhtml_branch_coverage=1 00:45:12.671 --rc genhtml_function_coverage=1 00:45:12.671 --rc genhtml_legend=1 00:45:12.671 --rc geninfo_all_blocks=1 00:45:12.671 --rc geninfo_unexecuted_blocks=1 00:45:12.671 00:45:12.671 ' 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:45:12.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:12.671 --rc genhtml_branch_coverage=1 00:45:12.671 --rc genhtml_function_coverage=1 00:45:12.671 --rc genhtml_legend=1 00:45:12.671 --rc geninfo_all_blocks=1 00:45:12.671 --rc geninfo_unexecuted_blocks=1 00:45:12.671 00:45:12.671 ' 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:12.671 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:12.672 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:12.672 
09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:12.672 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:12.672 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:45:12.672 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:12.672 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:45:12.672 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:12.672 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:12.672 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:12.672 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:12.672 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:12.672 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:12.672 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:12.672 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:12.672 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:12.672 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:45:12.931 09:56:19 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:45:12.931 Cannot find device "nvmf_init_br" 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:45:12.931 Cannot find device "nvmf_init_br2" 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:45:12.931 Cannot find device "nvmf_tgt_br" 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:45:12.931 Cannot find device "nvmf_tgt_br2" 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:45:12.931 Cannot find device "nvmf_init_br" 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:45:12.931 Cannot find device "nvmf_init_br2" 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:45:12.931 Cannot find device "nvmf_tgt_br" 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:45:12.931 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:45:12.932 Cannot find device "nvmf_tgt_br2" 00:45:12.932 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:45:12.932 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:45:12.932 Cannot find device "nvmf_br" 00:45:12.932 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:45:12.932 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:45:12.932 Cannot find device "nvmf_init_if" 00:45:12.932 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:45:12.932 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:45:12.932 Cannot find device "nvmf_init_if2" 00:45:12.932 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:45:12.932 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:12.932 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:12.932 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:45:12.932 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:12.932 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:12.932 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:45:12.932 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:45:12.932 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:45:12.932 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:45:12.932 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:45:12.932 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:45:12.932 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:45:12.932 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:45:12.932 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:45:12.932 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:45:12.932 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:45:12.932 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:45:12.932 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:45:12.932 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:45:12.932 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:45:12.932 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:45:12.932 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:45:12.932 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:45:12.932 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
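The records just above and below assemble the virtual network the rest of this log runs on: the nvmf_tgt_ns_spdk namespace owns the target ends of the veth pairs, the host keeps the initiator ends, and everything is then joined by the nvmf_br bridge with iptables ACCEPT rules for port 4420. Condensed to one of the two initiator/target pairs (a sketch; names, addresses and flags as they appear in the trace, with the nvmf_init_if2/nvmf_tgt_if2 pair on 10.0.0.2 and 10.0.0.4 omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    # bring every link up, then bridge the peer ends together (next records)
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT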
00:45:13.191 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:45:13.191 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:45:13.191 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:45:13.191 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:45:13.191 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:45:13.191 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:45:13.191 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:45:13.191 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:45:13.191 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:45:13.191 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:45:13.191 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:45:13.191 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:45:13.191 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:45:13.191 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:45:13.191 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:45:13.191 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:45:13.191 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:45:13.191 00:45:13.191 --- 10.0.0.3 ping statistics --- 00:45:13.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:13.191 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:45:13.191 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:45:13.191 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:45:13.191 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:45:13.191 00:45:13.191 --- 10.0.0.4 ping statistics --- 00:45:13.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:13.191 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:45:13.191 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:45:13.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:45:13.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:45:13.191 00:45:13.191 --- 10.0.0.1 ping statistics --- 00:45:13.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:13.191 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:45:13.191 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:45:13.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:13.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:45:13.191 00:45:13.191 --- 10.0.0.2 ping statistics --- 00:45:13.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:13.191 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:45:13.191 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:13.191 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:45:13.191 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:45:13.191 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:13.191 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:45:13.191 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:45:13.191 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:13.191 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:45:13.191 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:45:13.191 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:45:13.191 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:45:13.191 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:45:13.191 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:45:13.191 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:13.191 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:45:13.191 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=68853 00:45:13.191 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:45:13.191 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 68853 00:45:13.191 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 68853 ']' 00:45:13.191 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:13.191 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:13.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
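With connectivity verified by the four pings above, the multipath test prepends the namespace command to NVMF_APP and starts its own target, so nvmf_tgt only ever sees the namespaced interfaces. Roughly (a sketch assembled from the surrounding records; waitforlisten is the autotest helper that polls until the RPC socket answers, and the PID assignment mirrors the nvmfpid=68853 in the log):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!                # 68853 in this run
    waitforlisten "$nvmfpid"  # waits on /var/tmp/spdk.sock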
00:45:13.191 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable
09:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:45:13.191 [2024-12-09 09:56:20.160964] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization...
00:45:13.191 [2024-12-09 09:56:20.161048] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:45:13.449 [2024-12-09 09:56:20.308926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:45:13.449 [2024-12-09 09:56:20.363678] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:45:13.449 [2024-12-09 09:56:20.363734] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:45:13.449 [2024-12-09 09:56:20.363742] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:45:13.449 [2024-12-09 09:56:20.363747] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:45:13.449 [2024-12-09 09:56:20.363751] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:45:13.449 [2024-12-09 09:56:20.364642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:45:13.449 [2024-12-09 09:56:20.364707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:45:13.449 [2024-12-09 09:56:20.364841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:45:13.449 [2024-12-09 09:56:20.364844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:45:14.384 09:56:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:45:14.384 09:56:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0
00:45:14.384 09:56:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:45:14.384 09:56:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable
00:45:14.384 09:56:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:45:14.384 09:56:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:45:14.384 09:56:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
[2024-12-09 09:56:21.327064] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:45:14.384 09:56:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:45:14.643 Malloc0
09:56:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
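The provisioning in this and the following records is what makes the test multipath: one subsystem and one namespace, but two listeners on different addresses, followed by a host connection to each path. Reconstructed as plain rpc.py and nvme-cli calls (a sketch; rpc.py abbreviates the full scripts/rpc.py path shown in the log, and the -g -G connect flags are kept exactly as recorded):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420
    # one connection per path, same host identity for both
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 \
        --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 \
        --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G

The result, as the later records show, is a single /dev/nvme0n1 namespace with two controller paths, nvme0c0n1 and nvme0c1n1, whose /sys/block/.../ana_state files the rest of the test flips between optimized, non-optimized and inaccessible while fio runs.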
00:45:14.902 09:56:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:45:15.224 09:56:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:45:15.497 [2024-12-09 09:56:22.359257] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:45:15.497 09:56:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:45:15.756 [2024-12-09 09:56:22.619046] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:45:15.756 09:56:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:45:16.015 09:56:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:45:16.274 09:56:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:45:16.274 09:56:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:45:16.274 09:56:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:45:16.274 09:56:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:45:16.274 09:56:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:45:18.178 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:45:18.178 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:45:18.178 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:45:18.178 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:45:18.178 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:45:18.178 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:45:18.178 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:45:18.178 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:45:18.178 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:45:18.178 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ 
nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:45:18.178 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:45:18.178 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:45:18.178 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:45:18.178 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:45:18.178 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:45:18.178 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:45:18.178 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:45:18.178 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:45:18.178 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:45:18.178 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:45:18.178 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:45:18.179 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:45:18.179 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:45:18.179 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:45:18.179 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:45:18.179 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:45:18.179 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:45:18.179 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:45:18.179 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:45:18.179 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:45:18.179 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:45:18.179 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:45:18.179 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=68991 00:45:18.179 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:45:18.179 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:45:18.179 [global] 00:45:18.179 thread=1 00:45:18.179 invalidate=1 00:45:18.179 rw=randrw 00:45:18.179 time_based=1 00:45:18.179 runtime=6 00:45:18.179 ioengine=libaio 00:45:18.179 direct=1 00:45:18.179 bs=4096 00:45:18.179 iodepth=128 00:45:18.179 norandommap=0 00:45:18.179 numjobs=1 00:45:18.179 00:45:18.179 verify_dump=1 00:45:18.179 verify_backlog=512 00:45:18.179 verify_state_save=0 00:45:18.179 do_verify=1 00:45:18.179 verify=crc32c-intel 00:45:18.179 [job0] 00:45:18.179 filename=/dev/nvme0n1 00:45:18.179 Could not set queue depth (nvme0n1) 00:45:18.438 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:45:18.438 fio-3.35 00:45:18.438 Starting 1 thread 00:45:19.374 09:56:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:45:19.374 09:56:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:45:19.632 09:56:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:45:19.632 09:56:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:45:19.632 09:56:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:45:19.632 09:56:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:45:19.632 09:56:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:45:19.632 09:56:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:45:19.632 09:56:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:45:19.632 09:56:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:45:19.632 09:56:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:45:19.632 09:56:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:45:19.632 09:56:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:45:19.632 09:56:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:45:19.632 09:56:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:45:21.041 09:56:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:45:21.041 09:56:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:45:21.041 09:56:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:45:21.041 09:56:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:45:21.041 09:56:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:45:21.299 09:56:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:45:21.299 09:56:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:45:21.299 09:56:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:45:21.299 09:56:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:45:21.299 09:56:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:45:21.299 09:56:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:45:21.299 09:56:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:45:21.299 09:56:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:45:21.299 09:56:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:45:21.299 09:56:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:45:21.299 09:56:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:45:21.299 09:56:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:45:21.300 09:56:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:45:22.235 09:56:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:45:22.235 09:56:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:45:22.235 09:56:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:45:22.235 09:56:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 68991 00:45:24.764 00:45:24.764 job0: (groupid=0, jobs=1): err= 0: pid=69017: Mon Dec 9 09:56:31 2024 00:45:24.764 read: IOPS=13.2k, BW=51.5MiB/s (54.0MB/s)(309MiB/6005msec) 00:45:24.764 slat (usec): min=2, max=4789, avg=41.71, stdev=170.28 00:45:24.764 clat (usec): min=522, max=17463, avg=6714.15, stdev=1144.15 00:45:24.764 lat (usec): min=609, max=17475, avg=6755.86, stdev=1148.69 00:45:24.764 clat percentiles (usec): 00:45:24.764 | 1.00th=[ 4146], 5.00th=[ 5145], 10.00th=[ 5473], 20.00th=[ 5866], 00:45:24.764 | 30.00th=[ 6128], 40.00th=[ 6390], 50.00th=[ 6652], 60.00th=[ 6849], 00:45:24.764 | 70.00th=[ 7111], 80.00th=[ 7439], 90.00th=[ 8029], 95.00th=[ 8717], 00:45:24.764 | 99.00th=[10159], 99.50th=[10683], 99.90th=[11731], 99.95th=[15926], 00:45:24.764 | 99.99th=[16581] 00:45:24.764 bw ( KiB/s): min= 9016, max=35384, per=52.43%, avg=27652.36, stdev=8229.19, samples=11 00:45:24.764 iops : min= 2254, max= 8846, avg=6913.09, stdev=2057.30, samples=11 00:45:24.764 write: IOPS=7726, BW=30.2MiB/s (31.6MB/s)(156MiB/5183msec); 0 zone resets 00:45:24.764 slat (usec): min=4, max=5289, avg=55.64, stdev=111.36 00:45:24.764 clat (usec): min=321, max=16450, avg=5731.00, stdev=1047.92 00:45:24.764 lat (usec): min=602, max=17163, avg=5786.65, stdev=1051.50 00:45:24.764 clat percentiles (usec): 00:45:24.764 | 1.00th=[ 3097], 5.00th=[ 4178], 10.00th=[ 4621], 20.00th=[ 5080], 00:45:24.764 | 30.00th=[ 5342], 40.00th=[ 5538], 50.00th=[ 5735], 60.00th=[ 5932], 00:45:24.764 | 70.00th=[ 6128], 80.00th=[ 6325], 90.00th=[ 6718], 95.00th=[ 7111], 00:45:24.764 | 99.00th=[ 8979], 99.50th=[ 9634], 99.90th=[14877], 99.95th=[15139], 00:45:24.764 | 99.99th=[15795] 00:45:24.764 bw ( KiB/s): min= 9336, max=34480, per=89.44%, avg=27640.00, stdev=7974.44, samples=11 00:45:24.764 iops : min= 2334, max= 8620, avg=6910.00, stdev=1993.61, samples=11 00:45:24.764 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:45:24.764 lat (msec) : 2=0.13%, 4=1.68%, 10=97.16%, 20=1.00% 00:45:24.764 cpu : usr=6.93%, sys=31.55%, ctx=8525, majf=0, minf=114 00:45:24.764 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:45:24.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:45:24.764 issued rwts: total=79178,40044,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:24.764 latency : target=0, window=0, percentile=100.00%, depth=128 00:45:24.764 00:45:24.764 Run status group 0 (all jobs): 00:45:24.764 READ: bw=51.5MiB/s (54.0MB/s), 51.5MiB/s-51.5MiB/s (54.0MB/s-54.0MB/s), io=309MiB (324MB), run=6005-6005msec 00:45:24.764 WRITE: bw=30.2MiB/s (31.6MB/s), 30.2MiB/s-30.2MiB/s (31.6MB/s-31.6MB/s), io=156MiB (164MB), run=5183-5183msec 00:45:24.764 00:45:24.764 Disk stats (read/write): 00:45:24.764 nvme0n1: ios=78182/39196, merge=0/0, ticks=470936/199937, in_queue=670873, util=98.61% 00:45:24.764 09:56:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:45:24.764 09:56:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:45:25.022 09:56:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:45:25.022 09:56:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:45:25.022 09:56:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:45:25.022 09:56:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:45:25.022 09:56:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:45:25.022 09:56:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:45:25.022 09:56:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:45:25.022 09:56:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:45:25.022 09:56:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:45:25.022 09:56:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:45:25.022 09:56:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:45:25.022 09:56:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:45:25.022 09:56:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:45:26.399 09:56:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:45:26.399 09:56:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:45:26.399 09:56:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:45:26.399 09:56:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:45:26.399 09:56:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=69152 00:45:26.399 09:56:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:45:26.399 09:56:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:45:26.399 [global] 00:45:26.399 thread=1 00:45:26.399 invalidate=1 00:45:26.399 rw=randrw 00:45:26.399 time_based=1 00:45:26.399 runtime=6 00:45:26.399 ioengine=libaio 00:45:26.399 direct=1 00:45:26.399 bs=4096 00:45:26.399 iodepth=128 00:45:26.399 norandommap=0 00:45:26.399 numjobs=1 00:45:26.399 00:45:26.399 verify_dump=1 00:45:26.399 verify_backlog=512 00:45:26.399 verify_state_save=0 00:45:26.399 do_verify=1 00:45:26.399 verify=crc32c-intel 00:45:26.399 [job0] 00:45:26.399 filename=/dev/nvme0n1 00:45:26.399 Could not set queue depth (nvme0n1) 00:45:26.399 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:45:26.399 fio-3.35 00:45:26.399 Starting 1 thread 00:45:27.334 09:56:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:45:27.334 09:56:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:45:27.593 09:56:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:45:27.593 09:56:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:45:27.593 09:56:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:45:27.593 09:56:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:45:27.593 09:56:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:45:27.593 09:56:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:45:27.593 09:56:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:45:27.593 09:56:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:45:27.593 09:56:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:45:27.593 09:56:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:45:27.593 09:56:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:45:27.593 09:56:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:45:27.593 09:56:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:45:28.542 09:56:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:45:28.542 09:56:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:45:28.542 09:56:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:45:28.542 09:56:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:45:28.803 09:56:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:45:29.062 09:56:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:45:29.062 09:56:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:45:29.062 09:56:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:45:29.062 09:56:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:45:29.062 09:56:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:45:29.062 09:56:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:45:29.063 09:56:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:45:29.063 09:56:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:45:29.063 09:56:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:45:29.063 09:56:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:45:29.063 09:56:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:45:29.063 09:56:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:45:29.063 09:56:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:45:30.001 09:56:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:45:30.001 09:56:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:45:30.001 09:56:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:45:30.001 09:56:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 69152 00:45:32.544 00:45:32.544 job0: (groupid=0, jobs=1): err= 0: pid=69173: Mon Dec 9 09:56:39 2024 00:45:32.544 read: IOPS=13.9k, BW=54.3MiB/s (56.9MB/s)(326MiB/6000msec) 00:45:32.544 slat (usec): min=3, max=4200, avg=35.81, stdev=149.22 00:45:32.544 clat (usec): min=781, max=16360, avg=6378.24, stdev=1309.07 00:45:32.544 lat (usec): min=870, max=16369, avg=6414.05, stdev=1319.10 00:45:32.544 clat percentiles (usec): 00:45:32.544 | 1.00th=[ 3359], 5.00th=[ 4178], 10.00th=[ 4686], 20.00th=[ 5342], 00:45:32.544 | 30.00th=[ 5800], 40.00th=[ 6128], 50.00th=[ 6390], 60.00th=[ 6718], 00:45:32.544 | 70.00th=[ 6980], 80.00th=[ 7373], 90.00th=[ 7832], 95.00th=[ 8356], 00:45:32.544 | 99.00th=[10028], 99.50th=[10552], 99.90th=[11600], 99.95th=[12387], 00:45:32.544 | 99.99th=[15401] 00:45:32.544 bw ( KiB/s): min= 7352, max=42536, per=50.85%, avg=28277.64, stdev=10863.62, samples=11 00:45:32.544 iops : min= 1838, max=10634, avg=7069.36, stdev=2715.87, samples=11 00:45:32.544 write: IOPS=8397, BW=32.8MiB/s (34.4MB/s)(167MiB/5098msec); 0 zone resets 00:45:32.544 slat (usec): min=8, max=1615, avg=48.50, stdev=96.09 00:45:32.544 clat (usec): min=237, max=14512, avg=5315.90, stdev=1293.45 00:45:32.544 lat (usec): min=386, max=14532, avg=5364.40, stdev=1303.46 00:45:32.544 clat percentiles (usec): 00:45:32.544 | 1.00th=[ 2442], 5.00th=[ 3163], 10.00th=[ 3523], 20.00th=[ 4080], 00:45:32.544 | 30.00th=[ 4686], 40.00th=[ 5211], 50.00th=[ 5538], 60.00th=[ 5800], 00:45:32.544 | 70.00th=[ 5997], 80.00th=[ 6259], 90.00th=[ 6652], 95.00th=[ 7046], 00:45:32.544 | 99.00th=[ 8717], 99.50th=[ 9372], 99.90th=[11207], 99.95th=[13698], 00:45:32.544 | 99.99th=[14484] 00:45:32.544 bw ( KiB/s): min= 7504, max=42952, per=84.24%, avg=28295.27, stdev=10756.46, samples=11 00:45:32.544 iops : min= 1876, max=10738, avg=7073.73, stdev=2689.05, samples=11 00:45:32.544 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.03% 00:45:32.544 lat (msec) : 2=0.23%, 4=8.35%, 10=90.56%, 20=0.82% 00:45:32.544 cpu : usr=6.11%, sys=32.31%, ctx=9702, majf=0, minf=127 00:45:32.544 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:45:32.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:45:32.544 issued rwts: total=83419,42810,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:32.544 latency : target=0, window=0, percentile=100.00%, depth=128 00:45:32.544 00:45:32.544 Run status group 0 (all jobs): 00:45:32.544 READ: bw=54.3MiB/s (56.9MB/s), 54.3MiB/s-54.3MiB/s (56.9MB/s-56.9MB/s), io=326MiB (342MB), run=6000-6000msec 00:45:32.544 WRITE: bw=32.8MiB/s (34.4MB/s), 32.8MiB/s-32.8MiB/s (34.4MB/s-34.4MB/s), io=167MiB (175MB), run=5098-5098msec 00:45:32.544 00:45:32.544 Disk stats (read/write): 00:45:32.544 nvme0n1: ios=82528/42134, merge=0/0, ticks=474352/199027, in_queue=673379, util=98.63% 00:45:32.544 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:45:32.544 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:45:32.544 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect 
SPDKISFASTANDAWESOME 00:45:32.544 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:45:32.544 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:45:32.544 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:45:32.544 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:45:32.544 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:45:32.544 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:45:32.544 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:32.804 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:45:32.804 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:45:32.804 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:45:32.804 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:45:32.804 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:45:32.804 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:45:32.804 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:32.804 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:45:32.804 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:32.804 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:32.804 rmmod nvme_tcp 00:45:32.804 rmmod nvme_fabrics 00:45:32.804 rmmod nvme_keyring 00:45:32.804 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:32.804 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:45:32.804 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:45:32.804 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 68853 ']' 00:45:32.804 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 68853 00:45:32.804 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 68853 ']' 00:45:32.804 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 68853 00:45:32.804 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:45:32.804 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:32.804 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68853 00:45:32.804 killing process with pid 68853 00:45:32.804 09:56:39 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:32.804 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:32.804 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68853' 00:45:32.804 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 68853 00:45:32.804 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 68853 00:45:33.064 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:45:33.064 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:45:33.064 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:45:33.065 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:45:33.065 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:45:33.065 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:45:33.065 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:45:33.065 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:33.065 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:45:33.065 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:45:33.065 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:45:33.065 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:45:33.065 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:45:33.065 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:45:33.065 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:45:33.065 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:45:33.065 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:45:33.065 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:45:33.325 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:45:33.325 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:45:33.325 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:33.325 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:33.325 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:45:33.325 09:56:40 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:33.325 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:33.325 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:33.325 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:45:33.325 00:45:33.325 real 0m20.809s 00:45:33.325 user 1m20.992s 00:45:33.325 sys 0m7.164s 00:45:33.325 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:33.325 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:45:33.325 ************************************ 00:45:33.325 END TEST nvmf_target_multipath 00:45:33.325 ************************************ 00:45:33.325 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:45:33.325 09:56:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:45:33.325 09:56:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:33.325 09:56:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:45:33.325 ************************************ 00:45:33.325 START TEST nvmf_zcopy 00:45:33.325 ************************************ 00:45:33.325 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:45:33.586 * Looking for test storage... 
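The multipath test that ends above leans on one helper throughout: check_ana_state, whose xtrace is the repeating [[ ! -e /sys/block/.../ana_state ]] / sleep 1s pattern. It polls the controller's sysfs ana_state file until it reports the expected state, giving the kernel up to 20 one-second iterations to converge after each nvmf_subsystem_listener_set_ana_state RPC. A minimal sketch reconstructed from the trace; the variable names follow the xtrace, but the timeout's failure handling is an assumption:

    check_ana_state() {
        local path=$1 ana_state=$2                 # e.g. nvme0c1n1 non-optimized
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        # poll until the sysfs file exists and matches the expected ANA state
        while [[ ! -e $ana_state_f ]] || [[ $(<"$ana_state_f") != "$ana_state" ]]; do
            (( timeout-- == 0 )) && return 1       # assumed: give up after ~20 s
            sleep 1s
        done
    }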
00:45:33.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:45:33.586 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:45:33.586 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:45:33.586 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:45:33.586 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:45:33.586 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:33.586 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:33.586 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:45:33.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:33.587 --rc genhtml_branch_coverage=1 00:45:33.587 --rc genhtml_function_coverage=1 00:45:33.587 --rc genhtml_legend=1 00:45:33.587 --rc geninfo_all_blocks=1 00:45:33.587 --rc geninfo_unexecuted_blocks=1 00:45:33.587 00:45:33.587 ' 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:45:33.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:33.587 --rc genhtml_branch_coverage=1 00:45:33.587 --rc genhtml_function_coverage=1 00:45:33.587 --rc genhtml_legend=1 00:45:33.587 --rc geninfo_all_blocks=1 00:45:33.587 --rc geninfo_unexecuted_blocks=1 00:45:33.587 00:45:33.587 ' 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:45:33.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:33.587 --rc genhtml_branch_coverage=1 00:45:33.587 --rc genhtml_function_coverage=1 00:45:33.587 --rc genhtml_legend=1 00:45:33.587 --rc geninfo_all_blocks=1 00:45:33.587 --rc geninfo_unexecuted_blocks=1 00:45:33.587 00:45:33.587 ' 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:45:33.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:33.587 --rc genhtml_branch_coverage=1 00:45:33.587 --rc genhtml_function_coverage=1 00:45:33.587 --rc genhtml_legend=1 00:45:33.587 --rc geninfo_all_blocks=1 00:45:33.587 --rc geninfo_unexecuted_blocks=1 00:45:33.587 00:45:33.587 ' 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
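The lcov probe above runs through the generic comparator in scripts/common.sh: lt 1.15 2 expands to cmp_versions 1.15 '<' 2, which splits each version on '.', '-', or ':' and walks the fields numerically, padding the shorter version with zeros. Condensed here to just the '<' case seen in this trace, under a hypothetical name (the real helper also dispatches on '>' and '=', and validates each field through its decimal check):

    cmp_versions_lt() {                 # hypothetical name for the '<' slice of cmp_versions
        local IFS=.-: v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"          # 1.15 -> (1 15)
        read -ra ver2 <<< "$2"          # 2    -> (2)
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # missing fields count as 0
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                        # equal, so strict '<' is false
    }

Here 1 < 2 decides it on the first field, so the branch-coverage flags for pre-2.0 lcov get exported, which is what the LCOV_OPTS lines in the trace above are doing.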
00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:33.587 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:33.588 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:33.588 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:33.588 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:33.588 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:33.588 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:45:33.588 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:45:33.588 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
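A few entries above, common.sh mints the initiator identity used for every connect in this test: nvme gen-hostnqn returns a UUID-based NQN, its UUID suffix doubles as the host ID, and both are packed into the NVME_HOST argument array seen in the trace. A short sketch of that derivation; stripping through the last ':' is an assumption about how common.sh extracts the suffix:

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # keep only the trailing uuid
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # later consumed roughly as: nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.3 -s 4420 ...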
00:45:33.588 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:45:33.588 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:45:33.588 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:45:33.588 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:33.588 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:33.588 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:33.588 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:45:33.588 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:45:33.588 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:45:33.588 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:45:33.588 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:45:33.588 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:45:33.588 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:33.588 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:45:33.588 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:45:33.588 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:45:33.588 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:33.588 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:45:33.588 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:45:33.588 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:45:33.588 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:45:33.588 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:45:33.588 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:45:33.588 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:33.588 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:45:33.588 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:45:33.588 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:45:33.588 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:45:33.588 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:45:33.848 Cannot find device "nvmf_init_br" 00:45:33.848 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:45:33.848 09:56:40 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:45:33.848 Cannot find device "nvmf_init_br2" 00:45:33.848 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:45:33.848 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:45:33.848 Cannot find device "nvmf_tgt_br" 00:45:33.848 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:45:33.848 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:45:33.848 Cannot find device "nvmf_tgt_br2" 00:45:33.848 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:45:33.848 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:45:33.848 Cannot find device "nvmf_init_br" 00:45:33.848 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:45:33.848 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:45:33.848 Cannot find device "nvmf_init_br2" 00:45:33.848 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:45:33.848 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:45:33.848 Cannot find device "nvmf_tgt_br" 00:45:33.848 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:45:33.848 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:45:33.848 Cannot find device "nvmf_tgt_br2" 00:45:33.848 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:45:33.848 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:45:33.848 Cannot find device "nvmf_br" 00:45:33.848 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:45:33.848 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:45:33.848 Cannot find device "nvmf_init_if" 00:45:33.848 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:45:33.848 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:45:33.848 Cannot find device "nvmf_init_if2" 00:45:33.848 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:45:33.848 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:33.848 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:33.848 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:45:33.848 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:33.848 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:33.848 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:45:33.848 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:45:33.848 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:45:33.848 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:45:33.848 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:45:33.848 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:45:33.848 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:45:34.108 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:45:34.108 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:45:34.108 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:45:34.108 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:45:34.108 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:45:34.108 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:45:34.108 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:45:34.108 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:45:34.108 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:45:34.108 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:45:34.108 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:45:34.108 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:45:34.108 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:45:34.108 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:45:34.109 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:45:34.109 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:45:34.109 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:45:34.109 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:45:34.109 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:45:34.109 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:45:34.109 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:45:34.109 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:45:34.109 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:45:34.109 09:56:41 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:45:34.109 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:45:34.109 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:45:34.109 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:45:34.109 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:45:34.109 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.161 ms 00:45:34.109 00:45:34.109 --- 10.0.0.3 ping statistics --- 00:45:34.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:34.109 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:45:34.109 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:45:34.109 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:45:34.109 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.113 ms 00:45:34.109 00:45:34.109 --- 10.0.0.4 ping statistics --- 00:45:34.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:34.109 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:45:34.109 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:45:34.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:45:34.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:45:34.109 00:45:34.109 --- 10.0.0.1 ping statistics --- 00:45:34.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:34.109 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:45:34.109 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:45:34.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:45:34.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:45:34.109 00:45:34.109 --- 10.0.0.2 ping statistics --- 00:45:34.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:34.109 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:45:34.109 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:34.109 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:45:34.109 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:45:34.109 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:34.109 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:45:34.109 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:45:34.109 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:34.109 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:45:34.109 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:45:34.109 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:45:34.109 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:45:34.109 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:34.109 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:45:34.109 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=69507 00:45:34.109 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:45:34.109 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 69507 00:45:34.109 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 69507 ']' 00:45:34.109 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:34.109 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:34.109 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:34.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:34.109 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:34.109 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:45:34.369 [2024-12-09 09:56:41.174443] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:45:34.369 [2024-12-09 09:56:41.174534] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:34.369 [2024-12-09 09:56:41.303903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:34.369 [2024-12-09 09:56:41.356722] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:34.369 [2024-12-09 09:56:41.356839] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:34.369 [2024-12-09 09:56:41.356873] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:34.369 [2024-12-09 09:56:41.356898] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:34.369 [2024-12-09 09:56:41.356916] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:34.369 [2024-12-09 09:56:41.357221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:45:34.630 [2024-12-09 09:56:41.535064] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:45:34.630 [2024-12-09 09:56:41.559136] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.3 port 4420 *** 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:45:34.630 malloc0 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:34.630 { 00:45:34.630 "params": { 00:45:34.630 "name": "Nvme$subsystem", 00:45:34.630 "trtype": "$TEST_TRANSPORT", 00:45:34.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:34.630 "adrfam": "ipv4", 00:45:34.630 "trsvcid": "$NVMF_PORT", 00:45:34.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:34.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:34.630 "hdgst": ${hdgst:-false}, 00:45:34.630 "ddgst": ${ddgst:-false} 00:45:34.630 }, 00:45:34.630 "method": "bdev_nvme_attach_controller" 00:45:34.630 } 00:45:34.630 EOF 00:45:34.630 )") 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:45:34.630 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
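At this point the target side is fully assembled, entirely over JSON-RPC: rpc_cmd in the trace is a wrapper that drives scripts/rpc.py at the app's RPC socket inside the nvmf_tgt_ns_spdk namespace. Collected from the trace above into one block (flags verbatim; running rpc.py directly like this assumes the default /var/tmp/spdk.sock is reachable from your shell):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy        # zero-copy TCP transport
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -a -s SPDK00000000000001 -m 10                      # allow any host, max 10 namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0               # 32 MiB ram bdev, 4 KiB blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

bdevperf never touches the kernel initiator here; gen_nvmf_target_json, whose heredoc expansion continues below, hands it a JSON config that attaches the controller with bdev_nvme_attach_controller over TCP.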
00:45:34.631 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:45:34.631 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:34.631 "params": { 00:45:34.631 "name": "Nvme1", 00:45:34.631 "trtype": "tcp", 00:45:34.631 "traddr": "10.0.0.3", 00:45:34.631 "adrfam": "ipv4", 00:45:34.631 "trsvcid": "4420", 00:45:34.631 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:34.631 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:34.631 "hdgst": false, 00:45:34.631 "ddgst": false 00:45:34.631 }, 00:45:34.631 "method": "bdev_nvme_attach_controller" 00:45:34.631 }' 00:45:34.631 [2024-12-09 09:56:41.658850] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:45:34.631 [2024-12-09 09:56:41.658959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69544 ] 00:45:34.891 [2024-12-09 09:56:41.806385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:34.891 [2024-12-09 09:56:41.858259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:35.150 Running I/O for 10 seconds... 00:45:37.033 8293.00 IOPS, 64.79 MiB/s [2024-12-09T09:56:45.518Z] 8228.00 IOPS, 64.28 MiB/s [2024-12-09T09:56:46.100Z] 8295.33 IOPS, 64.81 MiB/s [2024-12-09T09:56:47.037Z] 8293.50 IOPS, 64.79 MiB/s [2024-12-09T09:56:48.422Z] 8255.00 IOPS, 64.49 MiB/s [2024-12-09T09:56:49.386Z] 8279.50 IOPS, 64.68 MiB/s [2024-12-09T09:56:50.359Z] 8286.71 IOPS, 64.74 MiB/s [2024-12-09T09:56:51.315Z] 8280.12 IOPS, 64.69 MiB/s [2024-12-09T09:56:52.251Z] 8295.11 IOPS, 64.81 MiB/s [2024-12-09T09:56:52.251Z] 8286.20 IOPS, 64.74 MiB/s 00:45:45.207 Latency(us) 00:45:45.207 [2024-12-09T09:56:52.251Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:45.207 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:45:45.207 Verification LBA range: start 0x0 length 0x1000 00:45:45.207 Nvme1n1 : 10.05 8254.57 64.49 0.00 0.00 15403.53 1988.97 43041.98 00:45:45.207 [2024-12-09T09:56:52.251Z] =================================================================================================================== 00:45:45.207 [2024-12-09T09:56:52.251Z] Total : 8254.57 64.49 0.00 0.00 15403.53 1988.97 43041.98 00:45:45.207 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=69661 00:45:45.207 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:45:45.207 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:45:45.207 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:45:45.207 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:45.207 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:45.207 { 00:45:45.207 "params": { 00:45:45.207 "name": "Nvme$subsystem", 00:45:45.207 "trtype": "$TEST_TRANSPORT", 00:45:45.207 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:45.207 "adrfam": "ipv4", 00:45:45.207 "trsvcid": "$NVMF_PORT", 00:45:45.207 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:45.207 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:45.207 "hdgst": ${hdgst:-false}, 00:45:45.207 "ddgst": ${ddgst:-false} 00:45:45.207 }, 00:45:45.207 "method": 
"bdev_nvme_attach_controller" 00:45:45.207 } 00:45:45.207 EOF 00:45:45.207 )") 00:45:45.207 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:45:45.207 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:45:45.207 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:45:45.207 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:45:45.208 [2024-12-09 09:56:52.232586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.208 [2024-12-09 09:56:52.232623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.208 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:45:45.208 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:45:45.208 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:45.208 "params": { 00:45:45.208 "name": "Nvme1", 00:45:45.208 "trtype": "tcp", 00:45:45.208 "traddr": "10.0.0.3", 00:45:45.208 "adrfam": "ipv4", 00:45:45.208 "trsvcid": "4420", 00:45:45.208 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:45.208 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:45.208 "hdgst": false, 00:45:45.208 "ddgst": false 00:45:45.208 }, 00:45:45.208 "method": "bdev_nvme_attach_controller" 00:45:45.208 }' 00:45:45.208 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.208 [2024-12-09 09:56:52.244556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.208 [2024-12-09 09:56:52.244597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.208 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.466 [2024-12-09 09:56:52.256535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.466 [2024-12-09 09:56:52.256557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.466 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.466 [2024-12-09 09:56:52.268501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.466 [2024-12-09 09:56:52.268538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.466 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:45:45.466 [2024-12-09 09:56:52.276634] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:45:45.466 [2024-12-09 09:56:52.276731] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69661 ] 00:45:45.466 [2024-12-09 09:56:52.280468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.466 [2024-12-09 09:56:52.280491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.466 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.466 [2024-12-09 09:56:52.292462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.466 [2024-12-09 09:56:52.292489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.466 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.466 [2024-12-09 09:56:52.304437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.466 [2024-12-09 09:56:52.304459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.466 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.466 [2024-12-09 09:56:52.316413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.466 [2024-12-09 09:56:52.316433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.467 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.467 [2024-12-09 09:56:52.328412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.467 [2024-12-09 09:56:52.328435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.467 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.467 [2024-12-09 09:56:52.340378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.467 [2024-12-09 09:56:52.340454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:45:45.467 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.467 [2024-12-09 09:56:52.352365] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.467 [2024-12-09 09:56:52.352388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.467 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.467 [2024-12-09 09:56:52.364340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.467 [2024-12-09 09:56:52.364361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.467 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.467 [2024-12-09 09:56:52.380309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.467 [2024-12-09 09:56:52.380341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.467 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.467 [2024-12-09 09:56:52.392305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.467 [2024-12-09 09:56:52.392328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.467 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.467 [2024-12-09 09:56:52.404274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.467 [2024-12-09 09:56:52.404296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.467 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.467 [2024-12-09 09:56:52.416255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.467 [2024-12-09 09:56:52.416278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.467 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.467 [2024-12-09 09:56:52.423010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:45.467 [2024-12-09 09:56:52.428255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.467 [2024-12-09 09:56:52.428281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.467 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.467 [2024-12-09 09:56:52.440227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.467 [2024-12-09 09:56:52.440256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.467 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.467 [2024-12-09 09:56:52.452193] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.467 [2024-12-09 09:56:52.452260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.467 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.467 [2024-12-09 09:56:52.464172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.467 [2024-12-09 09:56:52.464196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.467 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.467 [2024-12-09 09:56:52.476157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.467 [2024-12-09 09:56:52.476180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.467 [2024-12-09 09:56:52.477662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:45.467 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.467 [2024-12-09 09:56:52.488142] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.467 [2024-12-09 09:56:52.488167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
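
What the interleaved output above shows: the first 10-second bdevperf run has completed (8254.57 IOPS in the Total row; at the 8192-byte I/O size used here, 8254.57 x 8192 / 2^20 ≈ 64.49 MiB/s, matching the MiB/s column), and the test has launched a second bdevperf (pid 69661) whose controller config is generated by gen_nvmf_target_json and delivered over a process-substitution fd, which is why the xtrace shows --json /dev/fd/63. While that runs, the test repeatedly issues nvmf_subsystem_add_ns against a subsystem that already has namespace 1 attached, so the target answers each attempt with JSON-RPC error -32602 (Invalid parameters). A minimal sketch of the two operations, assuming an SPDK checkout at the path shown in the xtrace and that the nvmf common helpers have been sourced; the common.sh location and the rpc.py flags below are assumptions for illustration, not taken from this log:

SPDK_ROOT=/home/vagrant/spdk_repo/spdk      # matches the bdevperf path in the xtrace above
source "$SPDK_ROOT/test/nvmf/common.sh"     # provides gen_nvmf_target_json (assumed location)

# Second perf run: 5 s, queue depth 128, 50/50 random read/write, 8 KiB I/Os.
# <(...) is bash process substitution; the generated JSON config arrives on a
# /dev/fd/<n> path, exactly like the "--json /dev/fd/63" invocation above.
"$SPDK_ROOT/build/examples/bdevperf" --json <(gen_nvmf_target_json) \
    -t 5 -q 128 -w randrw -M 50 -o 8192 &
perfpid=$!

# The call the target keeps rejecting: NSID 1 is already attached to cnode1,
# so each attempt fails with Code=-32602 Msg=Invalid parameters.
"$SPDK_ROOT/scripts/rpc.py" nvmf_subsystem_add_ns \
    nqn.2016-06.io.spdk:cnode1 malloc0 --nsid 1

The repeated failures below appear to be the scenario being exercised (re-adding an NSID that is already in use while I/O continues) rather than a malfunction: the same triplet recurs per attempt, namely the subsystem.c and nvmf_rpc.c target-side errors followed by the Go-style JSON-RPC client error line.
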
00:45:45.467 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.467 [2024-12-09 09:56:52.500124] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.467 [2024-12-09 09:56:52.500203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.467 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.726 [2024-12-09 09:56:52.512114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.726 [2024-12-09 09:56:52.512143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.726 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.726 [2024-12-09 09:56:52.524094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.726 [2024-12-09 09:56:52.524123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.726 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.726 [2024-12-09 09:56:52.536077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.726 [2024-12-09 09:56:52.536162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.726 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.726 [2024-12-09 09:56:52.548072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.726 [2024-12-09 09:56:52.548177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.726 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.726 [2024-12-09 09:56:52.560014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.726 [2024-12-09 09:56:52.560073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.726 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.726 [2024-12-09 09:56:52.572036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.726 [2024-12-09 09:56:52.572115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.726 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.726 [2024-12-09 09:56:52.584011] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.726 [2024-12-09 09:56:52.584084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.726 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.726 [2024-12-09 09:56:52.595972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.726 [2024-12-09 09:56:52.596039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.726 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.726 [2024-12-09 09:56:52.607974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.726 [2024-12-09 09:56:52.608040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.726 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.726 [2024-12-09 09:56:52.619929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.726 [2024-12-09 09:56:52.619996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.726 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.726 [2024-12-09 09:56:52.631917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.726 [2024-12-09 09:56:52.631995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.727 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.727 Running I/O for 5 seconds... 00:45:45.727 [2024-12-09 09:56:52.643888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.727 [2024-12-09 09:56:52.643955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.727 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.727 [2024-12-09 09:56:52.661293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.727 [2024-12-09 09:56:52.661381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.727 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.727 [2024-12-09 09:56:52.677683] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.727 [2024-12-09 09:56:52.677759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.727 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.727 [2024-12-09 09:56:52.693656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.727 [2024-12-09 09:56:52.693734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.727 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.727 [2024-12-09 09:56:52.707756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.727 [2024-12-09 09:56:52.707847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.727 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.727 [2024-12-09 09:56:52.722508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.727 [2024-12-09 09:56:52.722610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.727 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns 
method, err: Code=-32602 Msg=Invalid parameters 00:45:45.727 [2024-12-09 09:56:52.736632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.727 [2024-12-09 09:56:52.736721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.727 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.727 [2024-12-09 09:56:52.754211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.727 [2024-12-09 09:56:52.754244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.727 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.986 [2024-12-09 09:56:52.769048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.986 [2024-12-09 09:56:52.769081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.986 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.986 [2024-12-09 09:56:52.784000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.986 [2024-12-09 09:56:52.784032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.986 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.986 [2024-12-09 09:56:52.797730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.986 [2024-12-09 09:56:52.797759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.986 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.986 [2024-12-09 09:56:52.812568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.986 [2024-12-09 09:56:52.812596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.986 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.986 [2024-12-09 09:56:52.828116] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.986 [2024-12-09 09:56:52.828147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.986 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.986 [2024-12-09 09:56:52.842353] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.986 [2024-12-09 09:56:52.842384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.986 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.986 [2024-12-09 09:56:52.857192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.986 [2024-12-09 09:56:52.857228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.986 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.986 [2024-12-09 09:56:52.872277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.986 [2024-12-09 09:56:52.872328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.986 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.986 [2024-12-09 09:56:52.886874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.986 [2024-12-09 09:56:52.886982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.986 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.986 [2024-12-09 09:56:52.901551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.986 [2024-12-09 09:56:52.901585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.986 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.986 [2024-12-09 09:56:52.916456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.986 [2024-12-09 
09:56:52.916497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.986 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.986 [2024-12-09 09:56:52.930578] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.986 [2024-12-09 09:56:52.930607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.986 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.986 [2024-12-09 09:56:52.945292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.986 [2024-12-09 09:56:52.945325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.986 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.986 [2024-12-09 09:56:52.959882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.986 [2024-12-09 09:56:52.959954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.986 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.986 [2024-12-09 09:56:52.973752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.986 [2024-12-09 09:56:52.973793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.986 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.986 [2024-12-09 09:56:52.988786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.986 [2024-12-09 09:56:52.988818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.986 2024/12/09 09:56:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.986 [2024-12-09 09:56:53.000047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.986 [2024-12-09 09:56:53.000078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.986 2024/12/09 09:56:53 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:45.986 [2024-12-09 09:56:53.014838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:45.986 [2024-12-09 09:56:53.014915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:45.986 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.245 [2024-12-09 09:56:53.030319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.245 [2024-12-09 09:56:53.030389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.245 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.245 [2024-12-09 09:56:53.046054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.245 [2024-12-09 09:56:53.046118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.245 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.245 [2024-12-09 09:56:53.061483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.245 [2024-12-09 09:56:53.061513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.245 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.245 [2024-12-09 09:56:53.077175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.245 [2024-12-09 09:56:53.077250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.245 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.245 [2024-12-09 09:56:53.091810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.245 [2024-12-09 09:56:53.091877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.245 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.245 [2024-12-09 09:56:53.106361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.245 [2024-12-09 09:56:53.106391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.245 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.245 [2024-12-09 09:56:53.117370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.245 [2024-12-09 09:56:53.117400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.245 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.245 [2024-12-09 09:56:53.132602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.245 [2024-12-09 09:56:53.132631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.245 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.245 [2024-12-09 09:56:53.143642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.245 [2024-12-09 09:56:53.143671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.245 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.245 [2024-12-09 09:56:53.159176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.245 [2024-12-09 09:56:53.159255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.245 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.245 [2024-12-09 09:56:53.174600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.245 [2024-12-09 09:56:53.174629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.245 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.245 [2024-12-09 09:56:53.188780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.245 [2024-12-09 09:56:53.188812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.246 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.246 [2024-12-09 09:56:53.203346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.246 [2024-12-09 09:56:53.203381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.246 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.246 [2024-12-09 09:56:53.215708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.246 [2024-12-09 09:56:53.215739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.246 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.246 [2024-12-09 09:56:53.230660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.246 [2024-12-09 09:56:53.230703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.246 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.246 [2024-12-09 09:56:53.241437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.246 [2024-12-09 09:56:53.241487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.246 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.246 [2024-12-09 09:56:53.257439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.246 [2024-12-09 09:56:53.257531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.246 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.246 [2024-12-09 09:56:53.277167] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.246 [2024-12-09 09:56:53.277246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.246 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.505 [2024-12-09 09:56:53.292244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.505 [2024-12-09 09:56:53.292323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.505 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.505 [2024-12-09 09:56:53.304097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.505 [2024-12-09 09:56:53.304131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.505 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.505 [2024-12-09 09:56:53.314694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.505 [2024-12-09 09:56:53.314792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.505 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.505 [2024-12-09 09:56:53.330153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.505 [2024-12-09 09:56:53.330220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.505 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.505 [2024-12-09 09:56:53.345063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.505 [2024-12-09 09:56:53.345132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.505 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.505 [2024-12-09 09:56:53.359863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.505 [2024-12-09 
09:56:53.359948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.505 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.505 [2024-12-09 09:56:53.377118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.505 [2024-12-09 09:56:53.377196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.505 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.505 [2024-12-09 09:56:53.392488] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.505 [2024-12-09 09:56:53.392577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.505 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.505 [2024-12-09 09:56:53.408371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.505 [2024-12-09 09:56:53.408457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.505 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.505 [2024-12-09 09:56:53.424934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.505 [2024-12-09 09:56:53.425004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.505 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.505 [2024-12-09 09:56:53.441051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.505 [2024-12-09 09:56:53.441121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.505 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.505 [2024-12-09 09:56:53.452174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.505 [2024-12-09 09:56:53.452244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.505 2024/12/09 09:56:53 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.505 [2024-12-09 09:56:53.468690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.505 [2024-12-09 09:56:53.468769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.505 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.505 [2024-12-09 09:56:53.484380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.505 [2024-12-09 09:56:53.484461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.505 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.505 [2024-12-09 09:56:53.499654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.505 [2024-12-09 09:56:53.499750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.505 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.505 [2024-12-09 09:56:53.516432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.505 [2024-12-09 09:56:53.516530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.505 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.505 [2024-12-09 09:56:53.528806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.505 [2024-12-09 09:56:53.528888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.505 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:46.505 [2024-12-09 09:56:53.544337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:46.505 [2024-12-09 09:56:53.544428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:46.765 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:45:46.765 [2024-12-09 09:56:53.560832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:45:46.765 [2024-12-09 09:56:53.560869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:45:46.765 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:45:46.765 [2024-12-09 09:56:53.576209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:45:46.765 [2024-12-09 09:56:53.576241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:45:46.765 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:45:46.765 [2024-12-09 09:56:53.591028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:45:46.765 [2024-12-09 09:56:53.591133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:45:46.765 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:45:46.765 [2024-12-09 09:56:53.606285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:45:46.765 [2024-12-09 09:56:53.606403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:45:46.765 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:45:46.765 [2024-12-09 09:56:53.616843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:45:46.765 [2024-12-09 09:56:53.616879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:45:46.765 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:45:46.765 [2024-12-09 09:56:53.631454] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:45:46.765 [2024-12-09 09:56:53.631496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:45:46.765 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:45:46.765 15043.00 IOPS, 117.52 MiB/s [2024-12-09T09:56:53.809Z] [2024-12-09 09:56:53.645670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:45:46.765 [2024-12-09 09:56:53.645741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:45:46.765 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:45:46.765 [2024-12-09 09:56:53.657934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:45:46.765 [2024-12-09 09:56:53.657969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:45:46.765 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:45:46.765 [2024-12-09 09:56:53.673057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:45:46.765 [2024-12-09 09:56:53.673177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:45:46.765 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:45:46.765 [2024-12-09 09:56:53.684848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:45:46.765 [2024-12-09 09:56:53.684885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:45:46.765 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:45:46.765 [2024-12-09 09:56:53.699282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:45:46.765 [2024-12-09 09:56:53.699309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:45:46.765 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:45:46.765 [2024-12-09 09:56:53.712932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:45:46.765 [2024-12-09 09:56:53.712958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:45:46.765 2024/12/09 09:56:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
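The request behind every one of these rejected calls can be read straight out of the logged params map (the %!s(bool=false) fragments are Go fmt artifacts from printing booleans with a %s verb, i.e. both flags are false). A minimal sketch of the equivalent JSON-RPC 2.0 payload follows; the jsonrpc and id envelope fields are assumed rather than taken from the log, while the method name and all parameter values come from the records above:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "nvmf_subsystem_add_ns",
  "params": {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "namespace": {
      "bdev_name": "malloc0",
      "nsid": 1,
      "no_auto_visible": false,
      "hide_metadata": false
    }
  }
}

The target answers with Code=-32602 (invalid params) because a namespace with NSID 1 is still attached to cnode1, so spdk_nvmf_subsystem_add_ns_ext refuses the duplicate; the interleaved IOPS/MiB/s samples show that I/O against the subsystem keeps flowing while the adds are retried.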
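To reproduce or inspect this by hand, SPDK's bundled RPC client can issue the same call and dump the live namespace map. A sketch, assuming the default /var/tmp/spdk.sock RPC socket and a repo-relative script path:

# list subsystems, including each attached namespace and its nsid
scripts/rpc.py nvmf_get_subsystems

# the call the loop keeps issuing; fails with -32602 while NSID 1 is attached
scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0

# detaching the namespace first would let the add succeed
scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1

That add/remove interplay is consistent with a namespace hot-plug stress loop running against cnode1 while an I/O workload generates the throughput measured in the IOPS lines.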
00:45:47.837 15252.00 IOPS, 119.16 MiB/s [2024-12-09T09:56:54.881Z]
00:45:48.355 [2024-12-09 09:56:55.347083] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:45:48.355 [2024-12-09 09:56:55.347109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:45:48.355 2024/12/09 09:56:55
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.355 [2024-12-09 09:56:55.361920] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.355 [2024-12-09 09:56:55.361946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.355 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.355 [2024-12-09 09:56:55.373376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.355 [2024-12-09 09:56:55.373402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.355 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.355 [2024-12-09 09:56:55.388973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.355 [2024-12-09 09:56:55.389002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.356 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.614 [2024-12-09 09:56:55.405389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.614 [2024-12-09 09:56:55.405423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.614 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.614 [2024-12-09 09:56:55.416967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.614 [2024-12-09 09:56:55.416998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.614 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.614 [2024-12-09 09:56:55.434256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.614 [2024-12-09 09:56:55.434286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.614 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.614 [2024-12-09 09:56:55.450112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.614 [2024-12-09 09:56:55.450143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.614 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.614 [2024-12-09 09:56:55.473983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.614 [2024-12-09 09:56:55.474018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.614 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.614 [2024-12-09 09:56:55.488798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.614 [2024-12-09 09:56:55.488829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.614 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.614 [2024-12-09 09:56:55.503662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.614 [2024-12-09 09:56:55.503698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.614 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.614 [2024-12-09 09:56:55.514749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.614 [2024-12-09 09:56:55.514782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.614 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.614 [2024-12-09 09:56:55.530307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.614 [2024-12-09 09:56:55.530340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.614 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.614 [2024-12-09 09:56:55.545591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.614 [2024-12-09 09:56:55.545625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.614 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.614 [2024-12-09 09:56:55.559852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.614 [2024-12-09 09:56:55.559898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.614 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.614 [2024-12-09 09:56:55.574084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.614 [2024-12-09 09:56:55.574117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.614 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.614 [2024-12-09 09:56:55.588865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.614 [2024-12-09 09:56:55.588905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.615 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.615 [2024-12-09 09:56:55.600084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.615 [2024-12-09 09:56:55.600130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.615 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.615 [2024-12-09 09:56:55.614810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.615 [2024-12-09 09:56:55.614840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.615 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.615 [2024-12-09 09:56:55.628182] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.615 [2024-12-09 09:56:55.628218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.615 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.615 15212.67 IOPS, 118.85 MiB/s [2024-12-09T09:56:55.659Z] [2024-12-09 09:56:55.643663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.615 [2024-12-09 09:56:55.643702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.615 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.876 [2024-12-09 09:56:55.659873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.876 [2024-12-09 09:56:55.659927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.876 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.876 [2024-12-09 09:56:55.671304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.876 [2024-12-09 09:56:55.671345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.876 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.876 [2024-12-09 09:56:55.686124] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.876 [2024-12-09 09:56:55.686167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.876 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.876 [2024-12-09 09:56:55.698173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.876 [2024-12-09 09:56:55.698215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.876 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.876 [2024-12-09 09:56:55.714309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:45:48.876 [2024-12-09 09:56:55.714347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.876 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.876 [2024-12-09 09:56:55.729929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.876 [2024-12-09 09:56:55.729966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.876 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.876 [2024-12-09 09:56:55.745550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.876 [2024-12-09 09:56:55.745596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.876 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.876 [2024-12-09 09:56:55.761700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.876 [2024-12-09 09:56:55.761748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.876 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.876 [2024-12-09 09:56:55.773356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.876 [2024-12-09 09:56:55.773391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.876 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.876 [2024-12-09 09:56:55.788137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.876 [2024-12-09 09:56:55.788171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.876 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.876 [2024-12-09 09:56:55.803032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.876 [2024-12-09 09:56:55.803070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:45:48.876 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.876 [2024-12-09 09:56:55.819101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.876 [2024-12-09 09:56:55.819144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.876 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.876 [2024-12-09 09:56:55.830603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.876 [2024-12-09 09:56:55.830633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.876 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.876 [2024-12-09 09:56:55.845045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.876 [2024-12-09 09:56:55.845080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.876 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.876 [2024-12-09 09:56:55.860032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.876 [2024-12-09 09:56:55.860068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.876 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.876 [2024-12-09 09:56:55.875397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.876 [2024-12-09 09:56:55.875431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.877 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.877 [2024-12-09 09:56:55.889703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.877 [2024-12-09 09:56:55.889735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.877 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.877 [2024-12-09 09:56:55.904471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.877 [2024-12-09 09:56:55.904533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:48.877 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:48.877 [2024-12-09 09:56:55.916007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:48.877 [2024-12-09 09:56:55.916043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.136 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.136 [2024-12-09 09:56:55.932280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.136 [2024-12-09 09:56:55.932313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.136 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.136 [2024-12-09 09:56:55.947064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.136 [2024-12-09 09:56:55.947097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.136 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.136 [2024-12-09 09:56:55.961760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.136 [2024-12-09 09:56:55.961797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.136 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.136 [2024-12-09 09:56:55.977809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.136 [2024-12-09 09:56:55.977844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.136 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.136 [2024-12-09 09:56:55.991772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.136 [2024-12-09 09:56:55.991806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.136 2024/12/09 09:56:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.136 [2024-12-09 09:56:56.007009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.136 [2024-12-09 09:56:56.007044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.136 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.136 [2024-12-09 09:56:56.022317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.136 [2024-12-09 09:56:56.022349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.136 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.136 [2024-12-09 09:56:56.036659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.136 [2024-12-09 09:56:56.036688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.136 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.136 [2024-12-09 09:56:56.051245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.136 [2024-12-09 09:56:56.051278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.136 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.136 [2024-12-09 09:56:56.063146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.136 [2024-12-09 09:56:56.063180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.136 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:45:49.136 [2024-12-09 09:56:56.078163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.136 [2024-12-09 09:56:56.078195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.136 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.136 [2024-12-09 09:56:56.089844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.136 [2024-12-09 09:56:56.089876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.136 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.136 [2024-12-09 09:56:56.104973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.136 [2024-12-09 09:56:56.105007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.136 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.136 [2024-12-09 09:56:56.117075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.136 [2024-12-09 09:56:56.117107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.136 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.136 [2024-12-09 09:56:56.132172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.136 [2024-12-09 09:56:56.132206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.136 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.136 [2024-12-09 09:56:56.148410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.136 [2024-12-09 09:56:56.148442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.136 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.136 [2024-12-09 09:56:56.160207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:45:49.136 [2024-12-09 09:56:56.160240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.136 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.136 [2024-12-09 09:56:56.174611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.136 [2024-12-09 09:56:56.174645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.395 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.395 [2024-12-09 09:56:56.189792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.395 [2024-12-09 09:56:56.189825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.395 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.395 [2024-12-09 09:56:56.201731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.395 [2024-12-09 09:56:56.201770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.395 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.396 [2024-12-09 09:56:56.217747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.396 [2024-12-09 09:56:56.217776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.396 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.396 [2024-12-09 09:56:56.233252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.396 [2024-12-09 09:56:56.233284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.396 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.396 [2024-12-09 09:56:56.247870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.396 [2024-12-09 09:56:56.247902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:45:49.396 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.396 [2024-12-09 09:56:56.261759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.396 [2024-12-09 09:56:56.261803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.396 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.396 [2024-12-09 09:56:56.276379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.396 [2024-12-09 09:56:56.276413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.396 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.396 [2024-12-09 09:56:56.290160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.396 [2024-12-09 09:56:56.290195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.396 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.396 [2024-12-09 09:56:56.305302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.396 [2024-12-09 09:56:56.305339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.396 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.396 [2024-12-09 09:56:56.318992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.396 [2024-12-09 09:56:56.319023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.396 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.396 [2024-12-09 09:56:56.334051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.396 [2024-12-09 09:56:56.334080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.396 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.396 [2024-12-09 09:56:56.349735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.396 [2024-12-09 09:56:56.349765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.396 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.396 [2024-12-09 09:56:56.364286] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.396 [2024-12-09 09:56:56.364317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.396 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.396 [2024-12-09 09:56:56.378374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.396 [2024-12-09 09:56:56.378408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.396 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.396 [2024-12-09 09:56:56.393800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.396 [2024-12-09 09:56:56.393831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.396 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.396 [2024-12-09 09:56:56.409221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.396 [2024-12-09 09:56:56.409254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.396 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.396 [2024-12-09 09:56:56.423437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.396 [2024-12-09 09:56:56.423470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.396 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.656 [2024-12-09 09:56:56.438640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.656 [2024-12-09 09:56:56.438676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.656 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.656 [2024-12-09 09:56:56.456095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.656 [2024-12-09 09:56:56.456136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.656 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.656 [2024-12-09 09:56:56.473077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.656 [2024-12-09 09:56:56.473113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.656 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.656 [2024-12-09 09:56:56.488460] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.656 [2024-12-09 09:56:56.488507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.656 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.656 [2024-12-09 09:56:56.503329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.656 [2024-12-09 09:56:56.503368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.656 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:49.656 [2024-12-09 09:56:56.514167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:49.656 [2024-12-09 09:56:56.514201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:49.656 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters
00:45:49.656 [2024-12-09 09:56:56.529106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:45:49.656 [2024-12-09 09:56:56.529141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:45:49.656 2024/12/09 09:56:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-line failure (subsystem.c:2130 "Requested NSID 1 already in use", nvmf_rpc.c:1520 "Unable to add namespace", JSON-RPC Code=-32602 Msg=Invalid parameters) repeats with fresh timestamps every 11-16 ms from 09:56:56.543 through 09:56:56.622 ...]
00:45:49.656 15157.50 IOPS, 118.42 MiB/s [2024-12-09T09:56:56.700Z]
[... the failure loop continues unchanged from 09:56:56.637 through 09:56:57.622 while the concurrent I/O job keeps running ...]
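The Code=-32602 responses above are this test's intended negative path: NSID 1 is already occupied by malloc0, so every repeated nvmf_subsystem_add_ns against nqn.2016-06.io.spdk:cnode1 is rejected by spdk_nvmf_subsystem_add_ns_ext(). A minimal sketch of issuing the same call by hand, assuming the SPDK checkout and names from this run and a target on the default local RPC socket (the exact rpc.py argument order is an assumption, not taken from this log):

    # Hedged repro sketch; subsystem, bdev and NSID values mirror this run.
    cd /home/vagrant/spdk_repo/spdk
    # NSID 1 already maps to malloc0, so a second add with -n 1 should fail:
    scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0
    # Expected result: JSON-RPC error Code=-32602 (Invalid parameters),
    # matching the repeated failures logged above.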
00:45:50.744 15146.60 IOPS, 118.33 MiB/s [2024-12-09T09:56:57.788Z]
00:45:50.744 [2024-12-09 09:56:57.638160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:45:50.744 [2024-12-09 09:56:57.638188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:45:50.744
00:45:50.744 Latency(us)
[2024-12-09T09:56:57.788Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:45:50.744 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:45:50.744 Nvme1n1                     :       5.01   15151.80     118.37       0.00       0.00    8439.37    3806.24   23123.62
[2024-12-09T09:56:57.788Z] ===================================================================================================================
[2024-12-09T09:56:57.788Z] Total                       :              15151.80     118.37       0.00       0.00    8439.37    3806.24   23123.62
00:45:50.744 2024/12/09 09:56:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
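As a consistency check on the summary table: the MiB/s column is just IOPS times the fixed 8192-byte IO size, 15151.80 x 8192 B / 2^20 B = 15151.80 / 128 = 118.37 MiB/s, and the interim progress lines obey the same rule (15157.50 / 128 = 118.42, 15146.60 / 128 = 118.33).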
[... the same failure triplet repeats at ~12 ms intervals from 09:56:57.649 through 09:56:57.805 ...]
00:45:51.003 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (69661) - No such process
00:45:51.003 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 69661
00:45:51.003 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:45:51.003 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
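The "No such process" from zcopy.sh line 42 is benign: the backgrounded namespace add/remove loop (pid 69661) has already exited by the time the script tries to kill it, and the following wait just reaps its status. A generic sketch of that kill-then-wait idiom, with a hypothetical name standing in for the real zcopy.sh loop:

    # Hedged sketch of the pattern implied by the trace above;
    # ns_flip_loop is a placeholder name, not the actual zcopy.sh code.
    ns_flip_loop &
    loop_pid=$!
    # ... run the foreground I/O workload ...
    kill "$loop_pid" || true   # prints "No such process" if the loop already exited
    wait "$loop_pid" || true   # reap the child either way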
00:45:51.003 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:45:51.003 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:45:51.003 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:45:51.003 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:45:51.003 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:45:51.003 delay0
00:45:51.003 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:45:51.003 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:45:51.003 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:45:51.003 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:45:51.003 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:45:51.003 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'
00:45:51.003 [2024-12-09 09:56:58.032190] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:45:57.566 Initializing NVMe Controllers
00:45:57.566 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:45:57.566 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:45:57.566 Initialization complete. Launching workers.
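Before launching the abort example, the test has swapped NSID 1's backing device for a delay bdev, so commands stay in flight for about a second and the aborts have something to catch. A sketch of the same sequence as standalone RPCs, reusing the names and parameters traced above (rpc.py talking to the default local SPDK socket is an assumption; -r/-t/-w/-n set the delay bdev's average/p99 read and write latencies in microseconds):

    # Hedged sketch; all values mirror the traced rpc_cmd invocations.
    cd /home/vagrant/spdk_repo/spdk
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000    # ~1 s injected latency
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # With I/O now taking ~1 s, the abort example has in-flight commands to target:
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'

The abort counters that follow are self-consistent: success 211 + unsuccessful 184 = 395, which matches the number of aborts submitted.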
00:45:57.566 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 108 00:45:57.566 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 395, failed to submit 33 00:45:57.566 success 211, unsuccessful 184, failed 0 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:57.566 rmmod nvme_tcp 00:45:57.566 rmmod nvme_fabrics 00:45:57.566 rmmod nvme_keyring 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 69507 ']' 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 69507 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 69507 ']' 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 69507 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69507 00:45:57.566 killing process with pid 69507 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69507' 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 69507 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 69507 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:45:57.566 09:57:04 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:57.566 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:57.825 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:45:57.825 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:57.825 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:57.825 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:57.825 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:45:57.825 ************************************ 00:45:57.825 END TEST nvmf_zcopy 00:45:57.825 ************************************ 00:45:57.825 00:45:57.825 real 0m24.323s 00:45:57.825 user 0m40.206s 00:45:57.825 sys 0m6.088s 00:45:57.825 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:57.825 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:45:57.825 09:57:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:45:57.825 09:57:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:45:57.825 09:57:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:57.825 09:57:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:45:57.825 ************************************ 00:45:57.825 START TEST nvmf_nmic 00:45:57.825 ************************************ 00:45:57.825 09:57:04 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:45:57.825 * Looking for test storage... 00:45:57.825 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:45:57.825 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:45:57.825 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:45:57.825 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:45:58.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:58.085 --rc genhtml_branch_coverage=1 00:45:58.085 --rc genhtml_function_coverage=1 00:45:58.085 --rc genhtml_legend=1 00:45:58.085 --rc geninfo_all_blocks=1 00:45:58.085 --rc geninfo_unexecuted_blocks=1 00:45:58.085 00:45:58.085 ' 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:45:58.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:58.085 --rc genhtml_branch_coverage=1 00:45:58.085 --rc genhtml_function_coverage=1 00:45:58.085 --rc genhtml_legend=1 00:45:58.085 --rc geninfo_all_blocks=1 00:45:58.085 --rc geninfo_unexecuted_blocks=1 00:45:58.085 00:45:58.085 ' 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:45:58.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:58.085 --rc genhtml_branch_coverage=1 00:45:58.085 --rc genhtml_function_coverage=1 00:45:58.085 --rc genhtml_legend=1 00:45:58.085 --rc geninfo_all_blocks=1 00:45:58.085 --rc geninfo_unexecuted_blocks=1 00:45:58.085 00:45:58.085 ' 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:45:58.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:58.085 --rc genhtml_branch_coverage=1 00:45:58.085 --rc genhtml_function_coverage=1 00:45:58.085 --rc genhtml_legend=1 00:45:58.085 --rc geninfo_all_blocks=1 00:45:58.085 --rc geninfo_unexecuted_blocks=1 00:45:58.085 00:45:58.085 ' 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:58.085 09:57:04 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:58.085 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:58.086 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:45:58.086 09:57:04 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:45:58.086 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:45:58.086 Cannot 
find device "nvmf_init_br" 00:45:58.086 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:45:58.086 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:45:58.086 Cannot find device "nvmf_init_br2" 00:45:58.086 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:45:58.086 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:45:58.086 Cannot find device "nvmf_tgt_br" 00:45:58.086 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:45:58.086 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:45:58.086 Cannot find device "nvmf_tgt_br2" 00:45:58.086 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:45:58.086 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:45:58.086 Cannot find device "nvmf_init_br" 00:45:58.086 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:45:58.086 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:45:58.086 Cannot find device "nvmf_init_br2" 00:45:58.086 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:45:58.086 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:45:58.086 Cannot find device "nvmf_tgt_br" 00:45:58.086 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:45:58.086 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:45:58.086 Cannot find device "nvmf_tgt_br2" 00:45:58.086 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:45:58.086 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:45:58.344 Cannot find device "nvmf_br" 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:45:58.344 Cannot find device "nvmf_init_if" 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:45:58.344 Cannot find device "nvmf_init_if2" 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:58.344 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:58.344 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:45:58.344 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:45:58.603 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:45:58.603 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:45:58.603 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:45:58.603 00:45:58.603 --- 10.0.0.3 ping statistics --- 00:45:58.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:58.603 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:45:58.603 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:45:58.603 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:45:58.603 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.105 ms 00:45:58.603 00:45:58.603 --- 10.0.0.4 ping statistics --- 00:45:58.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:58.603 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:45:58.603 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:45:58.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:45:58.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:45:58.603 00:45:58.603 --- 10.0.0.1 ping statistics --- 00:45:58.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:58.603 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:45:58.603 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:45:58.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:45:58.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:45:58.603 00:45:58.603 --- 10.0.0.2 ping statistics --- 00:45:58.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:58.603 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:45:58.603 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:58.603 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:45:58.603 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:45:58.603 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:58.603 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:45:58.603 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:45:58.603 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:58.603 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:45:58.603 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:45:58.603 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:45:58.603 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:45:58.603 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:58.603 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:58.603 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=70036 00:45:58.603 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:45:58.603 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 70036 00:45:58.603 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 70036 ']' 00:45:58.603 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:58.603 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:58.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:58.603 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:58.603 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:58.603 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:58.603 [2024-12-09 09:57:05.520607] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:45:58.603 [2024-12-09 09:57:05.520680] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:58.862 [2024-12-09 09:57:05.674856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:45:58.862 [2024-12-09 09:57:05.733191] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:58.862 [2024-12-09 09:57:05.733237] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:58.862 [2024-12-09 09:57:05.733245] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:58.862 [2024-12-09 09:57:05.733250] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:58.862 [2024-12-09 09:57:05.733255] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:58.862 [2024-12-09 09:57:05.734107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:58.862 [2024-12-09 09:57:05.734346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:45:58.862 [2024-12-09 09:57:05.734658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:58.862 [2024-12-09 09:57:05.734670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:45:59.429 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:59.429 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:45:59.429 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:45:59.429 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:59.429 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:59.688 [2024-12-09 09:57:06.523269] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:59.688 Malloc0 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:59.688 09:57:06 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:59.688 [2024-12-09 09:57:06.594549] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:59.688 test case1: single bdev can't be used in multiple subsystems 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:59.688 [2024-12-09 09:57:06.630368] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:45:59.688 [2024-12-09 09:57:06.630404] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:45:59.688 [2024-12-09 09:57:06.630412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:59.688 2024/12/09 09:57:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 hide_metadata:%!s(bool=false) 
no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:59.688 request: 00:45:59.688 { 00:45:59.688 "method": "nvmf_subsystem_add_ns", 00:45:59.688 "params": { 00:45:59.688 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:45:59.688 "namespace": { 00:45:59.688 "bdev_name": "Malloc0", 00:45:59.688 "no_auto_visible": false, 00:45:59.688 "hide_metadata": false 00:45:59.688 } 00:45:59.688 } 00:45:59.688 } 00:45:59.688 Got JSON-RPC error response 00:45:59.688 GoRPCClient: error on JSON-RPC call 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:45:59.688 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:45:59.688 Adding namespace failed - expected result. 00:45:59.689 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:45:59.689 test case2: host connect to nvmf target in multiple paths 00:45:59.689 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:45:59.689 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:45:59.689 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:59.689 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:59.689 [2024-12-09 09:57:06.646468] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:45:59.689 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:59.689 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:45:59.948 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:46:00.207 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:46:00.207 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:46:00.207 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:46:00.207 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:46:00.207 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:46:02.107 09:57:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:46:02.107 09:57:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:46:02.107 09:57:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:46:02.107 09:57:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 
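[Everything nmic.sh does between target start-up and the fio run can be replayed against the RPC socket directly; the sketch below keeps the traced order and options verbatim (rpc_cmd in the trace wraps scripts/rpc.py, which defaults to /var/tmp/spdk.sock), with only the until-loop standing in for waitforserial:

    # Sketch assembled from this trace; options exactly as traced.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # case1: Malloc0 is already claimed exclusive_write by cnode1, so adding
    # it to a second subsystem must fail with the -32602 error shown above.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
        && echo 'unexpected: add_ns succeeded'

    # case2: a second listener gives the host two paths to the same subsystem.
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 \
         --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 \
         -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 \
         --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 \
         -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421

    # Stand-in for waitforserial: block until a namespace with the expected
    # serial shows up in lsblk.
    until [[ $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) -ge 1 ]]; do
        sleep 2
    done

Two controllers to the same subsystem over ports 4420 and 4421 is also why the later disconnect reports "disconnected 2 controller(s)".]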
00:46:02.107 09:57:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:46:02.107 09:57:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:46:02.107 09:57:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:46:02.107 [global] 00:46:02.107 thread=1 00:46:02.107 invalidate=1 00:46:02.107 rw=write 00:46:02.107 time_based=1 00:46:02.107 runtime=1 00:46:02.107 ioengine=libaio 00:46:02.107 direct=1 00:46:02.107 bs=4096 00:46:02.107 iodepth=1 00:46:02.107 norandommap=0 00:46:02.107 numjobs=1 00:46:02.107 00:46:02.107 verify_dump=1 00:46:02.107 verify_backlog=512 00:46:02.107 verify_state_save=0 00:46:02.107 do_verify=1 00:46:02.107 verify=crc32c-intel 00:46:02.107 [job0] 00:46:02.107 filename=/dev/nvme0n1 00:46:02.107 Could not set queue depth (nvme0n1) 00:46:02.366 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:02.366 fio-3.35 00:46:02.366 Starting 1 thread 00:46:03.300 00:46:03.300 job0: (groupid=0, jobs=1): err= 0: pid=70146: Mon Dec 9 09:57:10 2024 00:46:03.300 read: IOPS=4448, BW=17.4MiB/s (18.2MB/s)(17.4MiB/1001msec) 00:46:03.300 slat (usec): min=6, max=104, avg=10.07, stdev= 5.85 00:46:03.300 clat (usec): min=82, max=338, avg=111.43, stdev=13.65 00:46:03.300 lat (usec): min=89, max=348, avg=121.50, stdev=16.41 00:46:03.300 clat percentiles (usec): 00:46:03.300 | 1.00th=[ 90], 5.00th=[ 95], 10.00th=[ 97], 20.00th=[ 102], 00:46:03.300 | 30.00th=[ 105], 40.00th=[ 108], 50.00th=[ 110], 60.00th=[ 113], 00:46:03.300 | 70.00th=[ 116], 80.00th=[ 119], 90.00th=[ 128], 95.00th=[ 137], 00:46:03.300 | 99.00th=[ 157], 99.50th=[ 167], 99.90th=[ 192], 99.95th=[ 227], 00:46:03.300 | 99.99th=[ 338] 00:46:03.300 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:46:03.300 slat (usec): min=9, max=203, avg=14.02, stdev= 6.52 00:46:03.300 clat (usec): min=61, max=230, avg=83.39, stdev= 9.63 00:46:03.300 lat (usec): min=71, max=433, avg=97.40, stdev=13.27 00:46:03.300 clat percentiles (usec): 00:46:03.300 | 1.00th=[ 67], 5.00th=[ 71], 10.00th=[ 74], 20.00th=[ 77], 00:46:03.300 | 30.00th=[ 79], 40.00th=[ 81], 50.00th=[ 83], 60.00th=[ 85], 00:46:03.300 | 70.00th=[ 87], 80.00th=[ 90], 90.00th=[ 95], 95.00th=[ 100], 00:46:03.300 | 99.00th=[ 114], 99.50th=[ 120], 99.90th=[ 147], 99.95th=[ 176], 00:46:03.300 | 99.99th=[ 231] 00:46:03.300 bw ( KiB/s): min=20439, max=20439, per=100.00%, avg=20439.00, stdev= 0.00, samples=1 00:46:03.301 iops : min= 5109, max= 5109, avg=5109.00, stdev= 0.00, samples=1 00:46:03.301 lat (usec) : 100=55.83%, 250=44.15%, 500=0.02% 00:46:03.301 cpu : usr=1.80%, sys=8.60%, ctx=9061, majf=0, minf=5 00:46:03.301 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:03.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:03.301 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:03.301 issued rwts: total=4453,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:03.301 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:03.301 00:46:03.301 Run status group 0 (all jobs): 00:46:03.301 READ: bw=17.4MiB/s (18.2MB/s), 17.4MiB/s-17.4MiB/s (18.2MB/s-18.2MB/s), io=17.4MiB (18.2MB), run=1001-1001msec 00:46:03.301 WRITE: bw=18.0MiB/s (18.9MB/s), 18.0MiB/s-18.0MiB/s (18.9MB/s-18.9MB/s), io=18.0MiB (18.9MB), run=1001-1001msec 
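[The fio-wrapper call above evidently expands its '-p nvmf -i 4096 -d 1 -t write -r 1 -v' arguments into exactly the job shown in the log. Written out by hand (hypothetical path /tmp/nmic-job.fio; /dev/nvme0n1 is the namespace surfaced by the two connects), the equivalent is:

    # Sketch: the fio job the wrapper generated, reproduced verbatim from the
    # log and run directly.
    cat > /tmp/nmic-job.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=write
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=1
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1
    EOF
    fio /tmp/nmic-job.fio

runtime=1 with verify=crc32c-intel keeps the pass short while still checking written data end-to-end over the TCP transport.]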
00:46:03.301 00:46:03.301 Disk stats (read/write): 00:46:03.301 nvme0n1: ios=4114/4096, merge=0/0, ticks=485/374, in_queue=859, util=91.28% 00:46:03.559 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:46:03.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:46:03.560 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:46:03.560 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:46:03.560 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:46:03.560 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:46:03.560 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:46:03.560 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:46:03.560 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:46:03.560 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:46:03.560 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:46:03.560 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:46:03.560 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:46:03.560 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:03.560 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:46:03.560 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:03.560 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:03.560 rmmod nvme_tcp 00:46:03.560 rmmod nvme_fabrics 00:46:03.560 rmmod nvme_keyring 00:46:03.560 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:03.560 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:46:03.560 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:46:03.560 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 70036 ']' 00:46:03.560 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 70036 00:46:03.560 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 70036 ']' 00:46:03.560 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 70036 00:46:03.560 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:46:03.560 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:03.560 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70036 00:46:03.560 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:03.560 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:03.560 killing process with pid 70036 00:46:03.560 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 70036' 00:46:03.560 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 70036 00:46:03.560 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 70036 00:46:03.820 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:46:03.820 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:46:03.820 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:46:03.820 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:46:03.820 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:46:03.820 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:46:03.820 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:46:03.820 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:03.820 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:46:03.820 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:46:03.820 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:46:03.820 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:46:03.820 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:46:03.820 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:46:03.820 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:46:03.820 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:46:03.820 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:46:03.820 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:46:04.083 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:46:04.083 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:46:04.083 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:46:04.083 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:46:04.083 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:46:04.083 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:04.083 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:04.083 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:04.083 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:46:04.083 00:46:04.083 real 0m6.313s 00:46:04.083 user 0m20.148s 00:46:04.083 sys 0m1.451s 00:46:04.083 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:04.083 09:57:11 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:46:04.083 ************************************ 00:46:04.083 END TEST nvmf_nmic 00:46:04.083 ************************************ 00:46:04.083 09:57:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:46:04.083 09:57:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:46:04.083 09:57:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:04.083 09:57:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:46:04.354 ************************************ 00:46:04.354 START TEST nvmf_fio_target 00:46:04.354 ************************************ 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:46:04.354 * Looking for test storage... 00:46:04.354 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:46:04.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:04.354 --rc genhtml_branch_coverage=1 00:46:04.354 --rc genhtml_function_coverage=1 00:46:04.354 --rc genhtml_legend=1 00:46:04.354 --rc geninfo_all_blocks=1 00:46:04.354 --rc geninfo_unexecuted_blocks=1 00:46:04.354 00:46:04.354 ' 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:46:04.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:04.354 --rc genhtml_branch_coverage=1 00:46:04.354 --rc genhtml_function_coverage=1 00:46:04.354 --rc genhtml_legend=1 00:46:04.354 --rc geninfo_all_blocks=1 00:46:04.354 --rc geninfo_unexecuted_blocks=1 00:46:04.354 00:46:04.354 ' 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:46:04.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:04.354 --rc genhtml_branch_coverage=1 00:46:04.354 --rc genhtml_function_coverage=1 00:46:04.354 --rc genhtml_legend=1 00:46:04.354 --rc geninfo_all_blocks=1 00:46:04.354 --rc geninfo_unexecuted_blocks=1 00:46:04.354 00:46:04.354 ' 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:46:04.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:04.354 --rc genhtml_branch_coverage=1 00:46:04.354 --rc genhtml_function_coverage=1 00:46:04.354 --rc genhtml_legend=1 00:46:04.354 --rc geninfo_all_blocks=1 00:46:04.354 --rc geninfo_unexecuted_blocks=1 00:46:04.354 00:46:04.354 ' 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:46:04.354 
09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:04.354 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:04.355 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:46:04.355 09:57:11 
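
The `line 33: [: : integer expression expected` complaint above is ordinary bash behaviour rather than a test failure: `[ ... -eq ... ]` requires integer operands, and whichever variable gates that branch of build_nvmf_app_args expanded to an empty string, so the branch simply falls through. A two-line reproduction plus the usual guard (the variable name here is made up for illustration):

  $ [ '' -eq 1 ]
  bash: [: : integer expression expected

  # Typical guard: give the flag a numeric default before testing it.
  some_flag=${some_flag:-0}
  [ "$some_flag" -eq 1 ] && echo "flag set"
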
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:04.355 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:46:04.633 Cannot find device "nvmf_init_br" 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:46:04.633 Cannot find device "nvmf_init_br2" 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:46:04.633 Cannot find device "nvmf_tgt_br" 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:46:04.633 Cannot find device "nvmf_tgt_br2" 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:46:04.633 Cannot find device "nvmf_init_br" 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:46:04.633 Cannot find device "nvmf_init_br2" 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:46:04.633 Cannot find device "nvmf_tgt_br" 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:46:04.633 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:46:04.633 Cannot find device "nvmf_tgt_br2" 00:46:04.634 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:46:04.634 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:46:04.634 Cannot find device "nvmf_br" 00:46:04.634 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:46:04.634 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:46:04.634 Cannot find device "nvmf_init_if" 00:46:04.634 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:46:04.634 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:46:04.634 Cannot find device "nvmf_init_if2" 00:46:04.634 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:46:04.634 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:46:04.634 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:46:04.634 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:46:04.634 
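
Each teardown command above fails ("Cannot find device ...", "Cannot open network namespace ...") and the xtrace then records `# true` on the same common.sh line, which suggests every command is chained with `|| true` (or an equivalent guard) so the cleanup is best-effort: on a fresh VM there is nothing to remove, the failures are swallowed, and the script proceeds straight to setup. A condensed sketch of the idiom using the device names from the trace (the exact SPDK source construct is not shown here):

  # Best-effort teardown: tolerate interfaces and namespaces that don't exist yet.
  ip link set nvmf_init_br nomaster                          || true
  ip link delete nvmf_br type bridge                         || true
  ip link delete nvmf_init_if                                || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true
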
09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:46:04.634 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:46:04.634 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:46:04.634 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:46:04.634 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:46:04.634 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:46:04.634 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:46:04.634 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:46:04.634 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:46:04.901 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:46:04.901 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.121 ms 00:46:04.901 00:46:04.901 --- 10.0.0.3 ping statistics --- 00:46:04.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:04.901 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:46:04.901 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:46:04.901 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.120 ms 00:46:04.901 00:46:04.901 --- 10.0.0.4 ping statistics --- 00:46:04.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:04.901 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:46:04.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:46:04.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:46:04.901 00:46:04.901 --- 10.0.0.1 ping statistics --- 00:46:04.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:04.901 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:46:04.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:46:04.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:46:04.901 00:46:04.901 --- 10.0.0.2 ping statistics --- 00:46:04.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:04.901 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=70385 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 70385 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 70385 ']' 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:04.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:04.901 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:46:05.170 [2024-12-09 09:57:11.986470] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
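
At this point nvmf_veth_init has the whole virtual test network up: four veth pairs, with the target ends (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4) moved into the nvmf_tgt_ns_spdk namespace and the initiator ends (10.0.0.1, 10.0.0.2) left on the host, every bridge-side peer enslaved to nvmf_br, iptables ACCEPT rules for TCP port 4420, and the four pings confirming sub-millisecond reachability in both directions. Condensed into a runnable sketch for one initiator/target pair (the second pair is identical with the `2`-suffixed names):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3   # host -> namespace, across the bridge
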
00:46:05.170 [2024-12-09 09:57:11.986559] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:05.170 [2024-12-09 09:57:12.137421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:46:05.170 [2024-12-09 09:57:12.192507] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:05.170 [2024-12-09 09:57:12.192560] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:05.170 [2024-12-09 09:57:12.192582] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:05.170 [2024-12-09 09:57:12.192588] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:05.170 [2024-12-09 09:57:12.192592] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:05.170 [2024-12-09 09:57:12.193827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:05.170 [2024-12-09 09:57:12.193923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:46:05.170 [2024-12-09 09:57:12.194035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:05.170 [2024-12-09 09:57:12.194041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:46:06.139 09:57:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:06.139 09:57:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:46:06.139 09:57:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:46:06.139 09:57:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:46:06.139 09:57:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:46:06.139 09:57:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:06.139 09:57:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:46:06.139 [2024-12-09 09:57:13.137980] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:06.139 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:46:06.397 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:46:06.397 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:46:06.659 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:46:06.659 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:46:06.919 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:46:06.919 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:46:07.179 09:57:14 
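
The nvmf_tgt launched above inside the namespace was given `-m 0xF`, and the EAL notices confirm the effect: four cores available and four reactors started on cores 0-3, one reactor per bit set in the mask. Decoding such a mask is a plain bit scan; a throwaway illustration:

  mask=0xF
  for (( core = 0; core < 32; core++ )); do
      (( (mask >> core) & 1 )) && echo "reactor on core $core"
  done
  # 0xF = binary 1111 -> cores 0, 1, 2, 3

The `-e 0xFFFF` flag on the same command line is unrelated to cores: it is the tracepoint group mask that the app_setup_trace notices refer to.
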
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:46:07.179 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:46:07.439 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:46:07.698 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:46:07.698 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:46:07.957 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:46:07.957 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:46:08.223 09:57:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:46:08.223 09:57:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:46:08.485 09:57:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:46:08.744 09:57:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:46:08.744 09:57:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:46:09.003 09:57:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:46:09.003 09:57:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:46:09.261 09:57:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:46:09.520 [2024-12-09 09:57:16.313896] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:46:09.520 09:57:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:46:09.520 09:57:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:46:09.778 09:57:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:46:10.052 09:57:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:46:10.052 09:57:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:46:10.052 09:57:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
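
The rpc.py sequence above provisions the target end to end before any initiator attaches: a TCP transport with an 8192-byte I/O unit size (`-t tcp -o -u 8192`), seven 64 MiB malloc bdevs with 512-byte blocks (the MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE defaults from fio.sh), a two-disk RAID-0 and a three-disk concat volume, then subsystem nqn.2016-06.io.spdk:cnode1 (`-a` = allow any host, `-s` = serial number) with four namespaces and a TCP listener on 10.0.0.3:4420. Condensed, in the same order as the trace (`$rpc` stands in for the full scripts/rpc.py path):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  for _ in $(seq 7); do $rpc bdev_malloc_create 64 512; done      # Malloc0..Malloc6
  $rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

Four namespaces on one subsystem is why waitforserial is told to expect 4 devices: after `nvme connect`, they surface as /dev/nvme0n1 through /dev/nvme0n4, the filenames the generated fio job files below are built around (the wrapper's `-i 4096 -d 1 -t write -r 1 -v` arguments reappear there as bs=4096, iodepth=1, rw=write, runtime=1 and the crc32c-intel verify options).
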
nvme_devices=0 00:46:10.052 09:57:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:46:10.052 09:57:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:46:10.052 09:57:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:46:11.959 09:57:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:46:11.959 09:57:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:46:11.959 09:57:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:46:11.959 09:57:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:46:11.959 09:57:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:46:11.959 09:57:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:46:11.959 09:57:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:46:12.218 [global] 00:46:12.218 thread=1 00:46:12.218 invalidate=1 00:46:12.218 rw=write 00:46:12.218 time_based=1 00:46:12.218 runtime=1 00:46:12.218 ioengine=libaio 00:46:12.218 direct=1 00:46:12.218 bs=4096 00:46:12.218 iodepth=1 00:46:12.218 norandommap=0 00:46:12.218 numjobs=1 00:46:12.218 00:46:12.218 verify_dump=1 00:46:12.218 verify_backlog=512 00:46:12.218 verify_state_save=0 00:46:12.218 do_verify=1 00:46:12.218 verify=crc32c-intel 00:46:12.218 [job0] 00:46:12.218 filename=/dev/nvme0n1 00:46:12.218 [job1] 00:46:12.218 filename=/dev/nvme0n2 00:46:12.218 [job2] 00:46:12.218 filename=/dev/nvme0n3 00:46:12.218 [job3] 00:46:12.218 filename=/dev/nvme0n4 00:46:12.218 Could not set queue depth (nvme0n1) 00:46:12.218 Could not set queue depth (nvme0n2) 00:46:12.218 Could not set queue depth (nvme0n3) 00:46:12.218 Could not set queue depth (nvme0n4) 00:46:12.218 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:12.218 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:12.218 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:12.218 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:12.218 fio-3.35 00:46:12.218 Starting 4 threads 00:46:13.599 00:46:13.599 job0: (groupid=0, jobs=1): err= 0: pid=70676: Mon Dec 9 09:57:20 2024 00:46:13.599 read: IOPS=3372, BW=13.2MiB/s (13.8MB/s)(13.2MiB/1001msec) 00:46:13.599 slat (nsec): min=6918, max=32569, avg=9582.05, stdev=2476.15 00:46:13.599 clat (usec): min=119, max=481, avg=145.63, stdev=11.93 00:46:13.599 lat (usec): min=126, max=489, avg=155.21, stdev=12.62 00:46:13.599 clat percentiles (usec): 00:46:13.599 | 1.00th=[ 127], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 137], 00:46:13.599 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 147], 00:46:13.599 | 70.00th=[ 151], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 165], 00:46:13.599 | 99.00th=[ 176], 99.50th=[ 180], 99.90th=[ 200], 99.95th=[ 258], 00:46:13.599 | 99.99th=[ 482] 00:46:13.599 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:46:13.599 slat (usec): 
min=11, max=106, avg=14.91, stdev= 7.12 00:46:13.599 clat (usec): min=86, max=603, avg=115.67, stdev=15.05 00:46:13.599 lat (usec): min=97, max=616, avg=130.58, stdev=17.65 00:46:13.599 clat percentiles (usec): 00:46:13.599 | 1.00th=[ 97], 5.00th=[ 102], 10.00th=[ 104], 20.00th=[ 108], 00:46:13.599 | 30.00th=[ 110], 40.00th=[ 113], 50.00th=[ 115], 60.00th=[ 117], 00:46:13.599 | 70.00th=[ 120], 80.00th=[ 123], 90.00th=[ 129], 95.00th=[ 135], 00:46:13.599 | 99.00th=[ 147], 99.50th=[ 153], 99.90th=[ 239], 99.95th=[ 433], 00:46:13.599 | 99.99th=[ 603] 00:46:13.599 bw ( KiB/s): min=15760, max=15760, per=35.01%, avg=15760.00, stdev= 0.00, samples=1 00:46:13.599 iops : min= 3940, max= 3940, avg=3940.00, stdev= 0.00, samples=1 00:46:13.599 lat (usec) : 100=1.44%, 250=98.49%, 500=0.06%, 750=0.01% 00:46:13.599 cpu : usr=1.40%, sys=6.80%, ctx=6961, majf=0, minf=5 00:46:13.599 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:13.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:13.599 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:13.599 issued rwts: total=3376,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:13.599 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:13.599 job1: (groupid=0, jobs=1): err= 0: pid=70677: Mon Dec 9 09:57:20 2024 00:46:13.599 read: IOPS=3077, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:46:13.599 slat (nsec): min=5920, max=26641, avg=9014.16, stdev=1679.79 00:46:13.599 clat (usec): min=120, max=295, avg=154.81, stdev=12.56 00:46:13.599 lat (usec): min=127, max=301, avg=163.82, stdev=12.69 00:46:13.599 clat percentiles (usec): 00:46:13.599 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:46:13.599 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 157], 00:46:13.599 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 169], 95.00th=[ 176], 00:46:13.599 | 99.00th=[ 188], 99.50th=[ 200], 99.90th=[ 273], 99.95th=[ 285], 00:46:13.599 | 99.99th=[ 297] 00:46:13.599 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:46:13.599 slat (usec): min=10, max=114, avg=14.49, stdev= 6.47 00:46:13.599 clat (usec): min=90, max=712, avg=121.68, stdev=15.74 00:46:13.599 lat (usec): min=102, max=727, avg=136.17, stdev=17.83 00:46:13.599 clat percentiles (usec): 00:46:13.599 | 1.00th=[ 102], 5.00th=[ 108], 10.00th=[ 110], 20.00th=[ 113], 00:46:13.599 | 30.00th=[ 116], 40.00th=[ 118], 50.00th=[ 120], 60.00th=[ 123], 00:46:13.599 | 70.00th=[ 126], 80.00th=[ 130], 90.00th=[ 137], 95.00th=[ 143], 00:46:13.599 | 99.00th=[ 155], 99.50th=[ 163], 99.90th=[ 253], 99.95th=[ 302], 00:46:13.599 | 99.99th=[ 717] 00:46:13.599 bw ( KiB/s): min=14416, max=14416, per=32.03%, avg=14416.00, stdev= 0.00, samples=1 00:46:13.599 iops : min= 3604, max= 3604, avg=3604.00, stdev= 0.00, samples=1 00:46:13.599 lat (usec) : 100=0.33%, 250=99.49%, 500=0.17%, 750=0.02% 00:46:13.599 cpu : usr=1.30%, sys=6.20%, ctx=6665, majf=0, minf=15 00:46:13.599 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:13.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:13.599 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:13.599 issued rwts: total=3081,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:13.599 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:13.599 job2: (groupid=0, jobs=1): err= 0: pid=70678: Mon Dec 9 09:57:20 2024 00:46:13.599 read: IOPS=1577, BW=6310KiB/s 
(6461kB/s)(6316KiB/1001msec) 00:46:13.599 slat (usec): min=8, max=189, avg=18.30, stdev=11.22 00:46:13.599 clat (usec): min=140, max=737, avg=290.55, stdev=50.56 00:46:13.599 lat (usec): min=165, max=772, avg=308.85, stdev=54.81 00:46:13.599 clat percentiles (usec): 00:46:13.599 | 1.00th=[ 161], 5.00th=[ 243], 10.00th=[ 255], 20.00th=[ 265], 00:46:13.599 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:46:13.599 | 70.00th=[ 293], 80.00th=[ 306], 90.00th=[ 371], 95.00th=[ 404], 00:46:13.599 | 99.00th=[ 437], 99.50th=[ 453], 99.90th=[ 478], 99.95th=[ 742], 00:46:13.599 | 99.99th=[ 742] 00:46:13.599 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:46:13.599 slat (usec): min=14, max=134, avg=26.98, stdev=12.11 00:46:13.599 clat (usec): min=122, max=547, avg=219.88, stdev=29.51 00:46:13.599 lat (usec): min=159, max=563, avg=246.86, stdev=27.05 00:46:13.599 clat percentiles (usec): 00:46:13.599 | 1.00th=[ 145], 5.00th=[ 169], 10.00th=[ 188], 20.00th=[ 202], 00:46:13.599 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 225], 00:46:13.599 | 70.00th=[ 233], 80.00th=[ 241], 90.00th=[ 251], 95.00th=[ 265], 00:46:13.599 | 99.00th=[ 302], 99.50th=[ 318], 99.90th=[ 334], 99.95th=[ 351], 00:46:13.599 | 99.99th=[ 545] 00:46:13.599 bw ( KiB/s): min= 8192, max= 8192, per=18.20%, avg=8192.00, stdev= 0.00, samples=1 00:46:13.599 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:46:13.599 lat (usec) : 250=53.49%, 500=46.46%, 750=0.06% 00:46:13.599 cpu : usr=1.20%, sys=6.60%, ctx=3645, majf=0, minf=15 00:46:13.599 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:13.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:13.599 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:13.599 issued rwts: total=1579,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:13.599 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:13.599 job3: (groupid=0, jobs=1): err= 0: pid=70679: Mon Dec 9 09:57:20 2024 00:46:13.599 read: IOPS=1727, BW=6909KiB/s (7075kB/s)(6916KiB/1001msec) 00:46:13.599 slat (usec): min=8, max=109, avg=18.62, stdev= 8.11 00:46:13.599 clat (usec): min=156, max=1946, avg=270.15, stdev=75.08 00:46:13.599 lat (usec): min=167, max=1962, avg=288.76, stdev=75.80 00:46:13.599 clat percentiles (usec): 00:46:13.599 | 1.00th=[ 172], 5.00th=[ 194], 10.00th=[ 241], 20.00th=[ 251], 00:46:13.599 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 273], 00:46:13.599 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 297], 95.00th=[ 314], 00:46:13.599 | 99.00th=[ 375], 99.50th=[ 400], 99.90th=[ 1844], 99.95th=[ 1942], 00:46:13.599 | 99.99th=[ 1942] 00:46:13.599 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:46:13.599 slat (usec): min=12, max=106, avg=25.26, stdev= 8.38 00:46:13.599 clat (usec): min=107, max=356, avg=215.41, stdev=32.21 00:46:13.599 lat (usec): min=129, max=382, avg=240.66, stdev=30.88 00:46:13.599 clat percentiles (usec): 00:46:13.599 | 1.00th=[ 122], 5.00th=[ 143], 10.00th=[ 182], 20.00th=[ 200], 00:46:13.599 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 219], 60.00th=[ 223], 00:46:13.599 | 70.00th=[ 229], 80.00th=[ 237], 90.00th=[ 247], 95.00th=[ 262], 00:46:13.599 | 99.00th=[ 297], 99.50th=[ 310], 99.90th=[ 326], 99.95th=[ 326], 00:46:13.599 | 99.99th=[ 359] 00:46:13.599 bw ( KiB/s): min= 8192, max= 8192, per=18.20%, avg=8192.00, stdev= 0.00, samples=1 00:46:13.599 iops : min= 2048, max= 2048, avg=2048.00, 
stdev= 0.00, samples=1 00:46:13.599 lat (usec) : 250=58.75%, 500=41.09%, 750=0.05%, 1000=0.03% 00:46:13.599 lat (msec) : 2=0.08% 00:46:13.599 cpu : usr=1.40%, sys=6.70%, ctx=3777, majf=0, minf=11 00:46:13.599 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:13.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:13.599 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:13.599 issued rwts: total=1729,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:13.599 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:13.599 00:46:13.599 Run status group 0 (all jobs): 00:46:13.599 READ: bw=38.1MiB/s (40.0MB/s), 6310KiB/s-13.2MiB/s (6461kB/s-13.8MB/s), io=38.1MiB (40.0MB), run=1001-1001msec 00:46:13.599 WRITE: bw=44.0MiB/s (46.1MB/s), 8184KiB/s-14.0MiB/s (8380kB/s-14.7MB/s), io=44.0MiB (46.1MB), run=1001-1001msec 00:46:13.599 00:46:13.599 Disk stats (read/write): 00:46:13.599 nvme0n1: ios=3071/3072, merge=0/0, ticks=465/371, in_queue=836, util=89.37% 00:46:13.599 nvme0n2: ios=2823/3072, merge=0/0, ticks=503/380, in_queue=883, util=94.15% 00:46:13.600 nvme0n3: ios=1591/1606, merge=0/0, ticks=528/369, in_queue=897, util=94.69% 00:46:13.600 nvme0n4: ios=1536/1783, merge=0/0, ticks=423/401, in_queue=824, util=89.86% 00:46:13.600 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:46:13.600 [global] 00:46:13.600 thread=1 00:46:13.600 invalidate=1 00:46:13.600 rw=randwrite 00:46:13.600 time_based=1 00:46:13.600 runtime=1 00:46:13.600 ioengine=libaio 00:46:13.600 direct=1 00:46:13.600 bs=4096 00:46:13.600 iodepth=1 00:46:13.600 norandommap=0 00:46:13.600 numjobs=1 00:46:13.600 00:46:13.600 verify_dump=1 00:46:13.600 verify_backlog=512 00:46:13.600 verify_state_save=0 00:46:13.600 do_verify=1 00:46:13.600 verify=crc32c-intel 00:46:13.600 [job0] 00:46:13.600 filename=/dev/nvme0n1 00:46:13.600 [job1] 00:46:13.600 filename=/dev/nvme0n2 00:46:13.600 [job2] 00:46:13.600 filename=/dev/nvme0n3 00:46:13.600 [job3] 00:46:13.600 filename=/dev/nvme0n4 00:46:13.600 Could not set queue depth (nvme0n1) 00:46:13.600 Could not set queue depth (nvme0n2) 00:46:13.600 Could not set queue depth (nvme0n3) 00:46:13.600 Could not set queue depth (nvme0n4) 00:46:13.859 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:13.859 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:13.859 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:13.859 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:13.859 fio-3.35 00:46:13.859 Starting 4 threads 00:46:14.796 00:46:14.796 job0: (groupid=0, jobs=1): err= 0: pid=70733: Mon Dec 9 09:57:21 2024 00:46:14.796 read: IOPS=1863, BW=7453KiB/s (7631kB/s)(7460KiB/1001msec) 00:46:14.796 slat (nsec): min=6885, max=42639, avg=10264.85, stdev=4088.93 00:46:14.796 clat (usec): min=124, max=946, avg=267.36, stdev=46.91 00:46:14.796 lat (usec): min=131, max=962, avg=277.63, stdev=47.67 00:46:14.796 clat percentiles (usec): 00:46:14.796 | 1.00th=[ 135], 5.00th=[ 151], 10.00th=[ 245], 20.00th=[ 255], 00:46:14.796 | 30.00th=[ 262], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:46:14.796 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 293], 95.00th=[ 310], 
00:46:14.796 | 99.00th=[ 400], 99.50th=[ 412], 99.90th=[ 693], 99.95th=[ 947], 00:46:14.796 | 99.99th=[ 947] 00:46:14.796 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:46:14.796 slat (usec): min=11, max=151, avg=20.00, stdev= 7.00 00:46:14.796 clat (usec): min=99, max=304, avg=212.85, stdev=19.13 00:46:14.796 lat (usec): min=111, max=442, avg=232.85, stdev=18.77 00:46:14.796 clat percentiles (usec): 00:46:14.796 | 1.00th=[ 157], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 200], 00:46:14.796 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 212], 60.00th=[ 217], 00:46:14.796 | 70.00th=[ 221], 80.00th=[ 227], 90.00th=[ 235], 95.00th=[ 243], 00:46:14.796 | 99.00th=[ 260], 99.50th=[ 269], 99.90th=[ 293], 99.95th=[ 297], 00:46:14.796 | 99.99th=[ 306] 00:46:14.796 bw ( KiB/s): min= 8175, max= 8175, per=18.51%, avg=8175.00, stdev= 0.00, samples=1 00:46:14.796 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:46:14.796 lat (usec) : 100=0.03%, 250=57.78%, 500=42.12%, 750=0.05%, 1000=0.03% 00:46:14.796 cpu : usr=1.00%, sys=4.80%, ctx=3915, majf=0, minf=11 00:46:14.796 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:14.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:14.796 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:14.796 issued rwts: total=1865,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:14.796 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:14.796 job1: (groupid=0, jobs=1): err= 0: pid=70734: Mon Dec 9 09:57:21 2024 00:46:14.796 read: IOPS=3379, BW=13.2MiB/s (13.8MB/s)(13.2MiB/1001msec) 00:46:14.796 slat (nsec): min=7720, max=87904, avg=9983.32, stdev=4673.92 00:46:14.796 clat (usec): min=123, max=367, avg=145.71, stdev=11.22 00:46:14.796 lat (usec): min=131, max=376, avg=155.69, stdev=13.09 00:46:14.796 clat percentiles (usec): 00:46:14.796 | 1.00th=[ 128], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 137], 00:46:14.796 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 147], 00:46:14.796 | 70.00th=[ 151], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 165], 00:46:14.796 | 99.00th=[ 176], 99.50th=[ 184], 99.90th=[ 206], 99.95th=[ 265], 00:46:14.796 | 99.99th=[ 367] 00:46:14.796 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:46:14.796 slat (usec): min=10, max=150, avg=14.42, stdev= 6.47 00:46:14.796 clat (usec): min=87, max=1491, avg=115.36, stdev=25.96 00:46:14.796 lat (usec): min=97, max=1504, avg=129.78, stdev=27.64 00:46:14.796 clat percentiles (usec): 00:46:14.796 | 1.00th=[ 95], 5.00th=[ 101], 10.00th=[ 103], 20.00th=[ 108], 00:46:14.796 | 30.00th=[ 110], 40.00th=[ 112], 50.00th=[ 115], 60.00th=[ 117], 00:46:14.796 | 70.00th=[ 120], 80.00th=[ 123], 90.00th=[ 129], 95.00th=[ 135], 00:46:14.796 | 99.00th=[ 145], 99.50th=[ 149], 99.90th=[ 253], 99.95th=[ 420], 00:46:14.796 | 99.99th=[ 1500] 00:46:14.796 bw ( KiB/s): min=15800, max=15800, per=35.77%, avg=15800.00, stdev= 0.00, samples=1 00:46:14.796 iops : min= 3950, max= 3950, avg=3950.00, stdev= 0.00, samples=1 00:46:14.796 lat (usec) : 100=2.35%, 250=97.56%, 500=0.07% 00:46:14.796 lat (msec) : 2=0.01% 00:46:14.796 cpu : usr=1.30%, sys=6.90%, ctx=6967, majf=0, minf=8 00:46:14.796 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:14.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:14.796 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:14.796 issued rwts: 
total=3383,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:14.796 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:14.796 job2: (groupid=0, jobs=1): err= 0: pid=70735: Mon Dec 9 09:57:21 2024 00:46:14.796 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:46:14.796 slat (nsec): min=7097, max=33707, avg=8755.62, stdev=1324.76 00:46:14.796 clat (usec): min=125, max=317, avg=158.13, stdev=10.86 00:46:14.796 lat (usec): min=133, max=326, avg=166.89, stdev=11.08 00:46:14.796 clat percentiles (usec): 00:46:14.796 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 149], 00:46:14.796 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 159], 00:46:14.796 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 172], 95.00th=[ 178], 00:46:14.796 | 99.00th=[ 188], 99.50th=[ 192], 99.90th=[ 206], 99.95th=[ 217], 00:46:14.796 | 99.99th=[ 318] 00:46:14.796 write: IOPS=3370, BW=13.2MiB/s (13.8MB/s)(13.2MiB/1001msec); 0 zone resets 00:46:14.796 slat (usec): min=10, max=117, avg=14.07, stdev= 6.19 00:46:14.796 clat (usec): min=98, max=2344, avg=128.33, stdev=41.11 00:46:14.796 lat (usec): min=113, max=2382, avg=142.40, stdev=42.50 00:46:14.796 clat percentiles (usec): 00:46:14.796 | 1.00th=[ 109], 5.00th=[ 114], 10.00th=[ 116], 20.00th=[ 119], 00:46:14.796 | 30.00th=[ 122], 40.00th=[ 124], 50.00th=[ 127], 60.00th=[ 129], 00:46:14.796 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 141], 95.00th=[ 147], 00:46:14.796 | 99.00th=[ 159], 99.50th=[ 167], 99.90th=[ 297], 99.95th=[ 619], 00:46:14.796 | 99.99th=[ 2343] 00:46:14.796 bw ( KiB/s): min=13360, max=13360, per=30.25%, avg=13360.00, stdev= 0.00, samples=1 00:46:14.796 iops : min= 3340, max= 3340, avg=3340.00, stdev= 0.00, samples=1 00:46:14.796 lat (usec) : 100=0.02%, 250=99.89%, 500=0.06%, 750=0.02% 00:46:14.796 lat (msec) : 4=0.02% 00:46:14.796 cpu : usr=1.40%, sys=5.70%, ctx=6449, majf=0, minf=11 00:46:14.796 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:14.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:14.796 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:14.796 issued rwts: total=3072,3374,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:14.796 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:14.796 job3: (groupid=0, jobs=1): err= 0: pid=70736: Mon Dec 9 09:57:21 2024 00:46:14.796 read: IOPS=1801, BW=7205KiB/s (7378kB/s)(7212KiB/1001msec) 00:46:14.796 slat (nsec): min=7090, max=50663, avg=15447.79, stdev=5826.17 00:46:14.796 clat (usec): min=156, max=6688, avg=272.23, stdev=216.42 00:46:14.796 lat (usec): min=172, max=6699, avg=287.68, stdev=216.62 00:46:14.796 clat percentiles (usec): 00:46:14.796 | 1.00th=[ 217], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 247], 00:46:14.796 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 265], 00:46:14.796 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 293], 00:46:14.796 | 99.00th=[ 314], 99.50th=[ 330], 99.90th=[ 4555], 99.95th=[ 6718], 00:46:14.796 | 99.99th=[ 6718] 00:46:14.796 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:46:14.796 slat (usec): min=11, max=116, avg=22.68, stdev= 9.58 00:46:14.796 clat (usec): min=116, max=2534, avg=209.04, stdev=57.11 00:46:14.796 lat (usec): min=129, max=2564, avg=231.72, stdev=56.71 00:46:14.796 clat percentiles (usec): 00:46:14.796 | 1.00th=[ 129], 5.00th=[ 167], 10.00th=[ 182], 20.00th=[ 194], 00:46:14.797 | 30.00th=[ 200], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 215], 00:46:14.797 | 
70.00th=[ 219], 80.00th=[ 225], 90.00th=[ 233], 95.00th=[ 241], 00:46:14.797 | 99.00th=[ 260], 99.50th=[ 273], 99.90th=[ 420], 99.95th=[ 529], 00:46:14.797 | 99.99th=[ 2540] 00:46:14.797 bw ( KiB/s): min= 8192, max= 8192, per=18.55%, avg=8192.00, stdev= 0.00, samples=1 00:46:14.797 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:46:14.797 lat (usec) : 250=64.19%, 500=35.60%, 750=0.05% 00:46:14.797 lat (msec) : 2=0.03%, 4=0.08%, 10=0.05% 00:46:14.797 cpu : usr=1.60%, sys=5.60%, ctx=3852, majf=0, minf=20 00:46:14.797 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:14.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:14.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:14.797 issued rwts: total=1803,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:14.797 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:14.797 00:46:14.797 Run status group 0 (all jobs): 00:46:14.797 READ: bw=39.5MiB/s (41.4MB/s), 7205KiB/s-13.2MiB/s (7378kB/s-13.8MB/s), io=39.5MiB (41.5MB), run=1001-1001msec 00:46:14.797 WRITE: bw=43.1MiB/s (45.2MB/s), 8184KiB/s-14.0MiB/s (8380kB/s-14.7MB/s), io=43.2MiB (45.3MB), run=1001-1001msec 00:46:14.797 00:46:14.797 Disk stats (read/write): 00:46:14.797 nvme0n1: ios=1586/1964, merge=0/0, ticks=428/438, in_queue=866, util=89.58% 00:46:14.797 nvme0n2: ios=3086/3072, merge=0/0, ticks=480/375, in_queue=855, util=90.53% 00:46:14.797 nvme0n3: ios=2658/3072, merge=0/0, ticks=450/411, in_queue=861, util=90.84% 00:46:14.797 nvme0n4: ios=1553/1842, merge=0/0, ticks=432/397, in_queue=829, util=89.27% 00:46:14.797 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:46:15.056 [global] 00:46:15.056 thread=1 00:46:15.056 invalidate=1 00:46:15.056 rw=write 00:46:15.056 time_based=1 00:46:15.056 runtime=1 00:46:15.056 ioengine=libaio 00:46:15.056 direct=1 00:46:15.056 bs=4096 00:46:15.056 iodepth=128 00:46:15.056 norandommap=0 00:46:15.056 numjobs=1 00:46:15.056 00:46:15.056 verify_dump=1 00:46:15.056 verify_backlog=512 00:46:15.056 verify_state_save=0 00:46:15.056 do_verify=1 00:46:15.056 verify=crc32c-intel 00:46:15.056 [job0] 00:46:15.056 filename=/dev/nvme0n1 00:46:15.056 [job1] 00:46:15.056 filename=/dev/nvme0n2 00:46:15.056 [job2] 00:46:15.056 filename=/dev/nvme0n3 00:46:15.056 [job3] 00:46:15.056 filename=/dev/nvme0n4 00:46:15.056 Could not set queue depth (nvme0n1) 00:46:15.056 Could not set queue depth (nvme0n2) 00:46:15.056 Could not set queue depth (nvme0n3) 00:46:15.056 Could not set queue depth (nvme0n4) 00:46:15.056 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:46:15.056 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:46:15.056 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:46:15.056 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:46:15.056 fio-3.35 00:46:15.056 Starting 4 threads 00:46:16.438 00:46:16.438 job0: (groupid=0, jobs=1): err= 0: pid=70791: Mon Dec 9 09:57:23 2024 00:46:16.438 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:46:16.438 slat (usec): min=12, max=3053, avg=92.46, stdev=340.38 00:46:16.438 clat (usec): min=9561, max=16502, avg=12577.99, stdev=1167.57 00:46:16.438 lat 
(usec): min=9601, max=16606, avg=12670.45, stdev=1153.91 00:46:16.438 clat percentiles (usec): 00:46:16.438 | 1.00th=[10159], 5.00th=[10683], 10.00th=[11207], 20.00th=[11600], 00:46:16.438 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12518], 60.00th=[12780], 00:46:16.438 | 70.00th=[13042], 80.00th=[13698], 90.00th=[14222], 95.00th=[14615], 00:46:16.438 | 99.00th=[15401], 99.50th=[15533], 99.90th=[15926], 99.95th=[16450], 00:46:16.438 | 99.99th=[16450] 00:46:16.438 write: IOPS=5196, BW=20.3MiB/s (21.3MB/s)(20.4MiB/1003msec); 0 zone resets 00:46:16.438 slat (usec): min=17, max=4316, avg=88.79, stdev=291.93 00:46:16.438 clat (usec): min=426, max=15500, avg=11928.30, stdev=1347.86 00:46:16.438 lat (usec): min=4742, max=15531, avg=12017.09, stdev=1349.21 00:46:16.438 clat percentiles (usec): 00:46:16.438 | 1.00th=[ 8848], 5.00th=[10159], 10.00th=[10552], 20.00th=[10945], 00:46:16.438 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11863], 60.00th=[12125], 00:46:16.438 | 70.00th=[12387], 80.00th=[12780], 90.00th=[13698], 95.00th=[14484], 00:46:16.438 | 99.00th=[15139], 99.50th=[15139], 99.90th=[15533], 99.95th=[15533], 00:46:16.438 | 99.99th=[15533] 00:46:16.438 bw ( KiB/s): min=20480, max=20521, per=31.76%, avg=20500.50, stdev=28.99, samples=2 00:46:16.438 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:46:16.438 lat (usec) : 500=0.01% 00:46:16.438 lat (msec) : 10=2.16%, 20=97.83% 00:46:16.438 cpu : usr=6.29%, sys=25.35%, ctx=851, majf=0, minf=1 00:46:16.438 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:46:16.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:16.438 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:46:16.438 issued rwts: total=5120,5212,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:16.438 latency : target=0, window=0, percentile=100.00%, depth=128 00:46:16.438 job1: (groupid=0, jobs=1): err= 0: pid=70792: Mon Dec 9 09:57:23 2024 00:46:16.438 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:46:16.438 slat (usec): min=4, max=11597, avg=165.38, stdev=700.53 00:46:16.438 clat (usec): min=10689, max=36668, avg=21471.63, stdev=6458.81 00:46:16.438 lat (usec): min=11791, max=37214, avg=21637.01, stdev=6509.60 00:46:16.438 clat percentiles (usec): 00:46:16.438 | 1.00th=[11994], 5.00th=[12911], 10.00th=[13698], 20.00th=[14091], 00:46:16.438 | 30.00th=[14615], 40.00th=[20055], 50.00th=[23200], 60.00th=[24249], 00:46:16.438 | 70.00th=[25560], 80.00th=[27132], 90.00th=[30278], 95.00th=[31589], 00:46:16.438 | 99.00th=[33817], 99.50th=[34341], 99.90th=[35390], 99.95th=[35914], 00:46:16.438 | 99.99th=[36439] 00:46:16.438 write: IOPS=3310, BW=12.9MiB/s (13.6MB/s)(13.0MiB/1005msec); 0 zone resets 00:46:16.438 slat (usec): min=8, max=6606, avg=136.94, stdev=494.17 00:46:16.438 clat (usec): min=4215, max=35033, avg=18285.58, stdev=5717.07 00:46:16.438 lat (usec): min=4247, max=35799, avg=18422.52, stdev=5754.23 00:46:16.438 clat percentiles (usec): 00:46:16.438 | 1.00th=[ 7242], 5.00th=[11731], 10.00th=[12256], 20.00th=[12780], 00:46:16.438 | 30.00th=[13829], 40.00th=[14746], 50.00th=[17695], 60.00th=[20317], 00:46:16.438 | 70.00th=[22414], 80.00th=[23725], 90.00th=[25560], 95.00th=[28705], 00:46:16.438 | 99.00th=[31589], 99.50th=[32375], 99.90th=[33817], 99.95th=[33817], 00:46:16.438 | 99.99th=[34866] 00:46:16.438 bw ( KiB/s): min= 9216, max=16384, per=19.83%, avg=12800.00, stdev=5068.54, samples=2 00:46:16.438 iops : min= 2304, max= 4096, avg=3200.00, stdev=1267.14, samples=2 
00:46:16.438 lat (msec) : 10=0.63%, 20=48.70%, 50=50.68% 00:46:16.438 cpu : usr=4.78%, sys=13.94%, ctx=793, majf=0, minf=3 00:46:16.438 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:46:16.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:16.438 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:46:16.438 issued rwts: total=3072,3327,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:16.438 latency : target=0, window=0, percentile=100.00%, depth=128 00:46:16.438 job2: (groupid=0, jobs=1): err= 0: pid=70793: Mon Dec 9 09:57:23 2024 00:46:16.438 read: IOPS=4578, BW=17.9MiB/s (18.8MB/s)(17.9MiB/1003msec) 00:46:16.438 slat (usec): min=6, max=4406, avg=105.67, stdev=502.40 00:46:16.438 clat (usec): min=964, max=20295, avg=13802.84, stdev=1954.56 00:46:16.438 lat (usec): min=3685, max=20327, avg=13908.51, stdev=1965.30 00:46:16.438 clat percentiles (usec): 00:46:16.438 | 1.00th=[ 6390], 5.00th=[10814], 10.00th=[11338], 20.00th=[12387], 00:46:16.438 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13829], 60.00th=[14222], 00:46:16.438 | 70.00th=[14877], 80.00th=[15664], 90.00th=[16057], 95.00th=[16712], 00:46:16.438 | 99.00th=[17171], 99.50th=[17433], 99.90th=[20055], 99.95th=[20055], 00:46:16.438 | 99.99th=[20317] 00:46:16.438 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:46:16.438 slat (usec): min=7, max=4012, avg=102.09, stdev=429.02 00:46:16.438 clat (usec): min=9127, max=19719, avg=13698.83, stdev=1575.28 00:46:16.438 lat (usec): min=9162, max=19751, avg=13800.91, stdev=1566.20 00:46:16.438 clat percentiles (usec): 00:46:16.438 | 1.00th=[10421], 5.00th=[10945], 10.00th=[11731], 20.00th=[12649], 00:46:16.438 | 30.00th=[12911], 40.00th=[13304], 50.00th=[13566], 60.00th=[13829], 00:46:16.438 | 70.00th=[14222], 80.00th=[15270], 90.00th=[16057], 95.00th=[16450], 00:46:16.438 | 99.00th=[16909], 99.50th=[16909], 99.90th=[17171], 99.95th=[17957], 00:46:16.438 | 99.99th=[19792] 00:46:16.438 bw ( KiB/s): min=18260, max=18640, per=28.58%, avg=18450.00, stdev=268.70, samples=2 00:46:16.438 iops : min= 4565, max= 4660, avg=4612.50, stdev=67.18, samples=2 00:46:16.438 lat (usec) : 1000=0.01% 00:46:16.438 lat (msec) : 4=0.07%, 10=0.82%, 20=99.08%, 50=0.03% 00:46:16.438 cpu : usr=3.79%, sys=18.66%, ctx=492, majf=0, minf=4 00:46:16.438 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:46:16.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:16.438 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:46:16.438 issued rwts: total=4592,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:16.438 latency : target=0, window=0, percentile=100.00%, depth=128 00:46:16.438 job3: (groupid=0, jobs=1): err= 0: pid=70794: Mon Dec 9 09:57:23 2024 00:46:16.438 read: IOPS=2973, BW=11.6MiB/s (12.2MB/s)(11.7MiB/1005msec) 00:46:16.438 slat (usec): min=4, max=10109, avg=176.43, stdev=763.28 00:46:16.438 clat (usec): min=3061, max=37475, avg=22536.98, stdev=6301.93 00:46:16.438 lat (usec): min=4618, max=37500, avg=22713.41, stdev=6343.19 00:46:16.438 clat percentiles (usec): 00:46:16.438 | 1.00th=[ 9503], 5.00th=[13829], 10.00th=[14484], 20.00th=[15533], 00:46:16.438 | 30.00th=[17171], 40.00th=[21365], 50.00th=[24249], 60.00th=[26084], 00:46:16.438 | 70.00th=[27132], 80.00th=[28443], 90.00th=[29754], 95.00th=[30540], 00:46:16.438 | 99.00th=[32637], 99.50th=[35390], 99.90th=[36439], 99.95th=[36439], 00:46:16.438 | 99.99th=[37487] 00:46:16.438 write: 
IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:46:16.438 slat (usec): min=7, max=7156, avg=141.27, stdev=532.33 00:46:16.438 clat (usec): min=9372, max=32362, avg=19379.49, stdev=4805.13 00:46:16.438 lat (usec): min=9427, max=32424, avg=19520.76, stdev=4844.32 00:46:16.438 clat percentiles (usec): 00:46:16.438 | 1.00th=[10814], 5.00th=[13435], 10.00th=[14222], 20.00th=[14746], 00:46:16.438 | 30.00th=[15139], 40.00th=[15664], 50.00th=[19530], 60.00th=[21627], 00:46:16.438 | 70.00th=[23200], 80.00th=[24511], 90.00th=[25822], 95.00th=[26346], 00:46:16.438 | 99.00th=[28705], 99.50th=[29230], 99.90th=[31589], 99.95th=[31851], 00:46:16.438 | 99.99th=[32375] 00:46:16.438 bw ( KiB/s): min= 9480, max=15096, per=19.04%, avg=12288.00, stdev=3971.11, samples=2 00:46:16.438 iops : min= 2370, max= 3774, avg=3072.00, stdev=992.78, samples=2 00:46:16.438 lat (msec) : 4=0.02%, 10=0.71%, 20=43.43%, 50=55.84% 00:46:16.438 cpu : usr=3.98%, sys=15.34%, ctx=739, majf=0, minf=3 00:46:16.438 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:46:16.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:16.438 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:46:16.438 issued rwts: total=2988,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:16.438 latency : target=0, window=0, percentile=100.00%, depth=128 00:46:16.438 00:46:16.438 Run status group 0 (all jobs): 00:46:16.438 READ: bw=61.3MiB/s (64.3MB/s), 11.6MiB/s-19.9MiB/s (12.2MB/s-20.9MB/s), io=61.6MiB (64.6MB), run=1003-1005msec 00:46:16.439 WRITE: bw=63.0MiB/s (66.1MB/s), 11.9MiB/s-20.3MiB/s (12.5MB/s-21.3MB/s), io=63.4MiB (66.4MB), run=1003-1005msec 00:46:16.439 00:46:16.439 Disk stats (read/write): 00:46:16.439 nvme0n1: ios=4436/4608, merge=0/0, ticks=12184/10375, in_queue=22559, util=90.17% 00:46:16.439 nvme0n2: ios=2679/3072, merge=0/0, ticks=15412/14008, in_queue=29420, util=89.70% 00:46:16.439 nvme0n3: ios=3944/4096, merge=0/0, ticks=16364/15485, in_queue=31849, util=90.41% 00:46:16.439 nvme0n4: ios=2586/2850, merge=0/0, ticks=20041/18053, in_queue=38094, util=89.05% 00:46:16.439 09:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:46:16.439 [global] 00:46:16.439 thread=1 00:46:16.439 invalidate=1 00:46:16.439 rw=randwrite 00:46:16.439 time_based=1 00:46:16.439 runtime=1 00:46:16.439 ioengine=libaio 00:46:16.439 direct=1 00:46:16.439 bs=4096 00:46:16.439 iodepth=128 00:46:16.439 norandommap=0 00:46:16.439 numjobs=1 00:46:16.439 00:46:16.439 verify_dump=1 00:46:16.439 verify_backlog=512 00:46:16.439 verify_state_save=0 00:46:16.439 do_verify=1 00:46:16.439 verify=crc32c-intel 00:46:16.439 [job0] 00:46:16.439 filename=/dev/nvme0n1 00:46:16.439 [job1] 00:46:16.439 filename=/dev/nvme0n2 00:46:16.439 [job2] 00:46:16.439 filename=/dev/nvme0n3 00:46:16.439 [job3] 00:46:16.439 filename=/dev/nvme0n4 00:46:16.439 Could not set queue depth (nvme0n1) 00:46:16.439 Could not set queue depth (nvme0n2) 00:46:16.439 Could not set queue depth (nvme0n3) 00:46:16.439 Could not set queue depth (nvme0n4) 00:46:16.439 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:46:16.439 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:46:16.439 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:46:16.439 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:46:16.439 fio-3.35 00:46:16.439 Starting 4 threads 00:46:17.820 00:46:17.820 job0: (groupid=0, jobs=1): err= 0: pid=70853: Mon Dec 9 09:57:24 2024 00:46:17.820 read: IOPS=5171, BW=20.2MiB/s (21.2MB/s)(20.2MiB/1002msec) 00:46:17.820 slat (usec): min=7, max=2602, avg=88.71, stdev=354.24 00:46:17.820 clat (usec): min=300, max=14675, avg=11836.24, stdev=1228.11 00:46:17.820 lat (usec): min=2496, max=14687, avg=11924.95, stdev=1192.24 00:46:17.820 clat percentiles (usec): 00:46:17.820 | 1.00th=[ 6063], 5.00th=[10028], 10.00th=[10290], 20.00th=[11207], 00:46:17.820 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12125], 60.00th=[12256], 00:46:17.820 | 70.00th=[12387], 80.00th=[12518], 90.00th=[12780], 95.00th=[13042], 00:46:17.820 | 99.00th=[13566], 99.50th=[13698], 99.90th=[14222], 99.95th=[14615], 00:46:17.820 | 99.99th=[14615] 00:46:17.820 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:46:17.820 slat (usec): min=18, max=5524, avg=86.11, stdev=285.01 00:46:17.820 clat (usec): min=6284, max=16931, avg=11594.88, stdev=1074.39 00:46:17.820 lat (usec): min=6314, max=16962, avg=11681.00, stdev=1083.48 00:46:17.820 clat percentiles (usec): 00:46:17.820 | 1.00th=[ 9634], 5.00th=[10159], 10.00th=[10552], 20.00th=[10814], 00:46:17.820 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11469], 00:46:17.820 | 70.00th=[11863], 80.00th=[12518], 90.00th=[13042], 95.00th=[13304], 00:46:17.820 | 99.00th=[14877], 99.50th=[16188], 99.90th=[16909], 99.95th=[16909], 00:46:17.820 | 99.99th=[16909] 00:46:17.820 bw ( KiB/s): min=21064, max=23417, per=34.96%, avg=22240.50, stdev=1663.82, samples=2 00:46:17.820 iops : min= 5266, max= 5854, avg=5560.00, stdev=415.78, samples=2 00:46:17.820 lat (usec) : 500=0.01% 00:46:17.820 lat (msec) : 4=0.30%, 10=3.13%, 20=96.57% 00:46:17.820 cpu : usr=5.59%, sys=21.88%, ctx=786, majf=0, minf=13 00:46:17.820 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:46:17.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:17.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:46:17.820 issued rwts: total=5182,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:17.820 latency : target=0, window=0, percentile=100.00%, depth=128 00:46:17.820 job1: (groupid=0, jobs=1): err= 0: pid=70854: Mon Dec 9 09:57:24 2024 00:46:17.820 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:46:17.820 slat (usec): min=4, max=7452, avg=194.62, stdev=738.78 00:46:17.820 clat (usec): min=17528, max=34601, avg=24305.93, stdev=2472.76 00:46:17.820 lat (usec): min=18004, max=34629, avg=24500.55, stdev=2517.38 00:46:17.820 clat percentiles (usec): 00:46:17.820 | 1.00th=[18744], 5.00th=[20055], 10.00th=[21103], 20.00th=[22414], 00:46:17.820 | 30.00th=[23200], 40.00th=[23725], 50.00th=[24249], 60.00th=[24773], 00:46:17.820 | 70.00th=[25560], 80.00th=[26084], 90.00th=[27132], 95.00th=[28181], 00:46:17.820 | 99.00th=[31065], 99.50th=[32375], 99.90th=[32375], 99.95th=[32375], 00:46:17.820 | 99.99th=[34341] 00:46:17.820 write: IOPS=2647, BW=10.3MiB/s (10.8MB/s)(10.4MiB/1007msec); 0 zone resets 00:46:17.820 slat (usec): min=7, max=7523, avg=176.67, stdev=550.82 00:46:17.820 clat (usec): min=5445, max=34293, avg=24240.03, stdev=3097.64 00:46:17.820 lat (usec): min=7011, max=34319, avg=24416.71, stdev=3142.88 00:46:17.820 clat percentiles (usec): 00:46:17.820 | 
1.00th=[ 9372], 5.00th=[20055], 10.00th=[21627], 20.00th=[22676], 00:46:17.820 | 30.00th=[23725], 40.00th=[23987], 50.00th=[24511], 60.00th=[24773], 00:46:17.820 | 70.00th=[25035], 80.00th=[25560], 90.00th=[27132], 95.00th=[29230], 00:46:17.820 | 99.00th=[31851], 99.50th=[32637], 99.90th=[34341], 99.95th=[34341], 00:46:17.820 | 99.99th=[34341] 00:46:17.820 bw ( KiB/s): min= 8862, max=11600, per=16.08%, avg=10231.00, stdev=1936.06, samples=2 00:46:17.820 iops : min= 2215, max= 2900, avg=2557.50, stdev=484.37, samples=2 00:46:17.820 lat (msec) : 10=0.54%, 20=4.46%, 50=95.01% 00:46:17.820 cpu : usr=3.68%, sys=10.24%, ctx=1051, majf=0, minf=7 00:46:17.820 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:46:17.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:17.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:46:17.820 issued rwts: total=2560,2666,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:17.821 latency : target=0, window=0, percentile=100.00%, depth=128 00:46:17.821 job2: (groupid=0, jobs=1): err= 0: pid=70855: Mon Dec 9 09:57:24 2024 00:46:17.821 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:46:17.821 slat (usec): min=6, max=3047, avg=98.87, stdev=413.52 00:46:17.821 clat (usec): min=10002, max=16035, avg=13294.58, stdev=931.60 00:46:17.821 lat (usec): min=10360, max=17970, avg=13393.45, stdev=870.30 00:46:17.821 clat percentiles (usec): 00:46:17.821 | 1.00th=[10814], 5.00th=[11469], 10.00th=[11994], 20.00th=[12780], 00:46:17.821 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13304], 60.00th=[13566], 00:46:17.821 | 70.00th=[13829], 80.00th=[14091], 90.00th=[14484], 95.00th=[14615], 00:46:17.821 | 99.00th=[15008], 99.50th=[15139], 99.90th=[16057], 99.95th=[16057], 00:46:17.821 | 99.99th=[16057] 00:46:17.821 write: IOPS=5024, BW=19.6MiB/s (20.6MB/s)(19.7MiB/1003msec); 0 zone resets 00:46:17.821 slat (usec): min=15, max=5697, avg=98.25, stdev=367.26 00:46:17.821 clat (usec): min=348, max=18238, avg=12969.77, stdev=1589.15 00:46:17.821 lat (usec): min=2771, max=18270, avg=13068.02, stdev=1596.07 00:46:17.821 clat percentiles (usec): 00:46:17.821 | 1.00th=[ 7242], 5.00th=[11338], 10.00th=[11600], 20.00th=[12125], 00:46:17.821 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12649], 60.00th=[13042], 00:46:17.821 | 70.00th=[13960], 80.00th=[14353], 90.00th=[14746], 95.00th=[15139], 00:46:17.821 | 99.00th=[17171], 99.50th=[17695], 99.90th=[18220], 99.95th=[18220], 00:46:17.821 | 99.99th=[18220] 00:46:17.821 bw ( KiB/s): min=18778, max=20521, per=30.89%, avg=19649.50, stdev=1232.49, samples=2 00:46:17.821 iops : min= 4694, max= 5130, avg=4912.00, stdev=308.30, samples=2 00:46:17.821 lat (usec) : 500=0.01% 00:46:17.821 lat (msec) : 4=0.33%, 10=0.56%, 20=99.10% 00:46:17.821 cpu : usr=5.49%, sys=19.66%, ctx=625, majf=0, minf=13 00:46:17.821 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:46:17.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:17.821 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:46:17.821 issued rwts: total=4608,5040,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:17.821 latency : target=0, window=0, percentile=100.00%, depth=128 00:46:17.821 job3: (groupid=0, jobs=1): err= 0: pid=70856: Mon Dec 9 09:57:24 2024 00:46:17.821 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:46:17.821 slat (usec): min=4, max=7709, avg=192.55, stdev=727.45 00:46:17.821 clat (usec): min=17379, max=33817, avg=24120.66, 
stdev=2580.24 00:46:17.821 lat (usec): min=17399, max=33838, avg=24313.21, stdev=2627.94 00:46:17.821 clat percentiles (usec): 00:46:17.821 | 1.00th=[18220], 5.00th=[20055], 10.00th=[20841], 20.00th=[22152], 00:46:17.821 | 30.00th=[22938], 40.00th=[23725], 50.00th=[23987], 60.00th=[24511], 00:46:17.821 | 70.00th=[25035], 80.00th=[25560], 90.00th=[27132], 95.00th=[28967], 00:46:17.821 | 99.00th=[31589], 99.50th=[33162], 99.90th=[33817], 99.95th=[33817], 00:46:17.821 | 99.99th=[33817] 00:46:17.821 write: IOPS=2656, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1007msec); 0 zone resets 00:46:17.821 slat (usec): min=7, max=7145, avg=178.30, stdev=516.48 00:46:17.821 clat (usec): min=5625, max=34734, avg=24292.85, stdev=3270.74 00:46:17.821 lat (usec): min=6248, max=34768, avg=24471.15, stdev=3307.81 00:46:17.821 clat percentiles (usec): 00:46:17.821 | 1.00th=[11207], 5.00th=[19268], 10.00th=[21365], 20.00th=[22676], 00:46:17.821 | 30.00th=[23462], 40.00th=[23987], 50.00th=[24511], 60.00th=[24773], 00:46:17.821 | 70.00th=[25035], 80.00th=[25560], 90.00th=[27919], 95.00th=[30016], 00:46:17.821 | 99.00th=[32900], 99.50th=[33817], 99.90th=[34341], 99.95th=[34866], 00:46:17.821 | 99.99th=[34866] 00:46:17.821 bw ( KiB/s): min= 8702, max=11783, per=16.10%, avg=10242.50, stdev=2178.60, samples=2 00:46:17.821 iops : min= 2175, max= 2945, avg=2560.00, stdev=544.47, samples=2 00:46:17.821 lat (msec) : 10=0.27%, 20=5.00%, 50=94.73% 00:46:17.821 cpu : usr=3.88%, sys=10.34%, ctx=1068, majf=0, minf=15 00:46:17.821 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:46:17.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:17.821 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:46:17.821 issued rwts: total=2560,2675,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:17.821 latency : target=0, window=0, percentile=100.00%, depth=128 00:46:17.821 00:46:17.821 Run status group 0 (all jobs): 00:46:17.821 READ: bw=57.8MiB/s (60.6MB/s), 9.93MiB/s-20.2MiB/s (10.4MB/s-21.2MB/s), io=58.2MiB (61.1MB), run=1002-1007msec 00:46:17.821 WRITE: bw=62.1MiB/s (65.1MB/s), 10.3MiB/s-22.0MiB/s (10.8MB/s-23.0MB/s), io=62.6MiB (65.6MB), run=1002-1007msec 00:46:17.821 00:46:17.821 Disk stats (read/write): 00:46:17.821 nvme0n1: ios=4658/4807, merge=0/0, ticks=12595/11440, in_queue=24035, util=90.07% 00:46:17.821 nvme0n2: ios=2097/2515, merge=0/0, ticks=15413/18689, in_queue=34102, util=90.62% 00:46:17.821 nvme0n3: ios=4152/4333, merge=0/0, ticks=12603/11345, in_queue=23948, util=91.45% 00:46:17.821 nvme0n4: ios=2104/2484, merge=0/0, ticks=15799/18147, in_queue=33946, util=91.72% 00:46:17.821 09:57:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:46:17.821 09:57:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=70876 00:46:17.821 09:57:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:46:17.821 09:57:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:46:17.821 [global] 00:46:17.821 thread=1 00:46:17.821 invalidate=1 00:46:17.821 rw=read 00:46:17.821 time_based=1 00:46:17.821 runtime=10 00:46:17.821 ioengine=libaio 00:46:17.821 direct=1 00:46:17.821 bs=4096 00:46:17.821 iodepth=1 00:46:17.821 norandommap=1 00:46:17.821 numjobs=1 00:46:17.821 00:46:17.821 [job0] 00:46:17.821 filename=/dev/nvme0n1 00:46:17.821 [job1] 00:46:17.821 filename=/dev/nvme0n2 00:46:17.821 [job2] 00:46:17.821 
filename=/dev/nvme0n3 00:46:17.821 [job3] 00:46:17.821 filename=/dev/nvme0n4 00:46:17.821 Could not set queue depth (nvme0n1) 00:46:17.821 Could not set queue depth (nvme0n2) 00:46:17.821 Could not set queue depth (nvme0n3) 00:46:17.821 Could not set queue depth (nvme0n4) 00:46:18.081 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:18.081 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:18.081 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:18.081 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:18.081 fio-3.35 00:46:18.081 Starting 4 threads 00:46:20.646 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:46:20.905 fio: pid=70923, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:46:20.905 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=69746688, buflen=4096 00:46:20.905 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:46:21.165 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=45350912, buflen=4096 00:46:21.165 fio: pid=70922, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:46:21.165 09:57:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:46:21.165 09:57:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:46:21.423 fio: pid=70920, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:46:21.423 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=51118080, buflen=4096 00:46:21.423 09:57:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:46:21.423 09:57:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:46:21.683 fio: pid=70921, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:46:21.683 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=29675520, buflen=4096 00:46:21.683 00:46:21.683 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70920: Mon Dec 9 09:57:28 2024 00:46:21.683 read: IOPS=3689, BW=14.4MiB/s (15.1MB/s)(48.8MiB/3383msec) 00:46:21.683 slat (usec): min=6, max=14824, avg=13.08, stdev=209.00 00:46:21.683 clat (usec): min=111, max=2485, avg=257.10, stdev=57.34 00:46:21.683 lat (usec): min=129, max=14991, avg=270.18, stdev=215.53 00:46:21.683 clat percentiles (usec): 00:46:21.683 | 1.00th=[ 130], 5.00th=[ 139], 10.00th=[ 151], 20.00th=[ 251], 00:46:21.683 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:46:21.683 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 293], 95.00th=[ 302], 00:46:21.683 | 99.00th=[ 318], 99.50th=[ 338], 99.90th=[ 652], 99.95th=[ 889], 00:46:21.683 | 99.99th=[ 1893] 00:46:21.683 bw ( KiB/s): min=13912, max=14387, per=20.12%, avg=14176.67, stdev=160.27, samples=6 00:46:21.683 iops : min= 3478, max= 
3596, avg=3543.67, stdev=39.83, samples=6 00:46:21.683 lat (usec) : 250=19.41%, 500=80.44%, 750=0.06%, 1000=0.04% 00:46:21.683 lat (msec) : 2=0.02%, 4=0.01% 00:46:21.683 cpu : usr=0.50%, sys=3.02%, ctx=12486, majf=0, minf=1 00:46:21.683 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:21.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:21.683 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:21.683 issued rwts: total=12481,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:21.683 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:21.683 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70921: Mon Dec 9 09:57:28 2024 00:46:21.683 read: IOPS=6482, BW=25.3MiB/s (26.6MB/s)(92.3MiB/3645msec) 00:46:21.683 slat (usec): min=6, max=11864, avg=11.56, stdev=138.34 00:46:21.683 clat (usec): min=93, max=1631, avg=141.88, stdev=22.06 00:46:21.683 lat (usec): min=101, max=12084, avg=153.44, stdev=140.91 00:46:21.683 clat percentiles (usec): 00:46:21.683 | 1.00th=[ 102], 5.00th=[ 112], 10.00th=[ 126], 20.00th=[ 135], 00:46:21.683 | 30.00th=[ 137], 40.00th=[ 141], 50.00th=[ 143], 60.00th=[ 145], 00:46:21.683 | 70.00th=[ 149], 80.00th=[ 151], 90.00th=[ 157], 95.00th=[ 163], 00:46:21.683 | 99.00th=[ 176], 99.50th=[ 184], 99.90th=[ 251], 99.95th=[ 412], 00:46:21.683 | 99.99th=[ 1020] 00:46:21.683 bw ( KiB/s): min=24846, max=27540, per=36.74%, avg=25885.43, stdev=812.31, samples=7 00:46:21.683 iops : min= 6211, max= 6885, avg=6471.29, stdev=203.18, samples=7 00:46:21.683 lat (usec) : 100=0.47%, 250=99.42%, 500=0.06%, 750=0.02% 00:46:21.683 lat (msec) : 2=0.02% 00:46:21.683 cpu : usr=1.02%, sys=5.02%, ctx=23646, majf=0, minf=1 00:46:21.683 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:21.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:21.683 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:21.683 issued rwts: total=23630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:21.683 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:21.683 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70922: Mon Dec 9 09:57:28 2024 00:46:21.683 read: IOPS=3521, BW=13.8MiB/s (14.4MB/s)(43.2MiB/3144msec) 00:46:21.683 slat (usec): min=7, max=14812, avg=18.24, stdev=161.92 00:46:21.683 clat (usec): min=114, max=2938, avg=264.22, stdev=54.85 00:46:21.683 lat (usec): min=122, max=15011, avg=282.46, stdev=170.13 00:46:21.683 clat percentiles (usec): 00:46:21.683 | 1.00th=[ 165], 5.00th=[ 235], 10.00th=[ 243], 20.00th=[ 249], 00:46:21.683 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:46:21.683 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 297], 00:46:21.683 | 99.00th=[ 314], 99.50th=[ 322], 99.90th=[ 408], 99.95th=[ 1106], 00:46:21.683 | 99.99th=[ 2900] 00:46:21.683 bw ( KiB/s): min=13980, max=14147, per=19.97%, avg=14071.50, stdev=65.08, samples=6 00:46:21.683 iops : min= 3495, max= 3536, avg=3517.50, stdev=16.20, samples=6 00:46:21.683 lat (usec) : 250=21.10%, 500=78.81%, 750=0.02%, 1000=0.01% 00:46:21.683 lat (msec) : 2=0.02%, 4=0.04% 00:46:21.683 cpu : usr=1.05%, sys=4.71%, ctx=11080, majf=0, minf=2 00:46:21.683 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:21.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:21.683 complete 
: 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:21.683 issued rwts: total=11073,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:21.683 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:21.683 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70923: Mon Dec 9 09:57:28 2024 00:46:21.683 read: IOPS=5900, BW=23.0MiB/s (24.2MB/s)(66.5MiB/2886msec) 00:46:21.683 slat (nsec): min=6928, max=91487, avg=9475.62, stdev=2849.53 00:46:21.683 clat (usec): min=119, max=5594, avg=159.03, stdev=105.67 00:46:21.683 lat (usec): min=127, max=5602, avg=168.50, stdev=105.87 00:46:21.683 clat percentiles (usec): 00:46:21.683 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 147], 00:46:21.683 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 157], 00:46:21.683 | 70.00th=[ 161], 80.00th=[ 163], 90.00th=[ 172], 95.00th=[ 176], 00:46:21.683 | 99.00th=[ 194], 99.50th=[ 204], 99.90th=[ 857], 99.95th=[ 3490], 00:46:21.683 | 99.99th=[ 5473] 00:46:21.683 bw ( KiB/s): min=22938, max=24159, per=33.61%, avg=23683.60, stdev=579.10, samples=5 00:46:21.683 iops : min= 5734, max= 6039, avg=5920.60, stdev=144.84, samples=5 00:46:21.683 lat (usec) : 250=99.81%, 500=0.04%, 750=0.04%, 1000=0.02% 00:46:21.683 lat (msec) : 2=0.04%, 4=0.04%, 10=0.03% 00:46:21.683 cpu : usr=0.76%, sys=4.85%, ctx=17030, majf=0, minf=2 00:46:21.683 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:21.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:21.683 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:21.683 issued rwts: total=17029,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:21.683 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:21.683 00:46:21.683 Run status group 0 (all jobs): 00:46:21.683 READ: bw=68.8MiB/s (72.2MB/s), 13.8MiB/s-25.3MiB/s (14.4MB/s-26.6MB/s), io=251MiB (263MB), run=2886-3645msec 00:46:21.683 00:46:21.683 Disk stats (read/write): 00:46:21.683 nvme0n1: ios=11120/0, merge=0/0, ticks=3039/0, in_queue=3039, util=96.09% 00:46:21.683 nvme0n2: ios=23550/0, merge=0/0, ticks=3385/0, in_queue=3385, util=95.88% 00:46:21.683 nvme0n3: ios=11040/0, merge=0/0, ticks=2973/0, in_queue=2973, util=96.32% 00:46:21.683 nvme0n4: ios=17028/0, merge=0/0, ticks=2708/0, in_queue=2708, util=96.41% 00:46:21.683 09:57:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:46:21.683 09:57:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:46:21.941 09:57:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:46:21.941 09:57:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:46:22.508 09:57:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:46:22.509 09:57:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:46:22.509 09:57:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:46:22.509 09:57:29 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:46:22.768 09:57:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:46:22.768 09:57:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:46:23.348 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:46:23.348 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 70876 00:46:23.348 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:46:23.348 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:46:23.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:46:23.348 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:46:23.348 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:46:23.348 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:46:23.348 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:46:23.348 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:46:23.348 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:46:23.348 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:46:23.348 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:46:23.348 nvmf hotplug test: fio failed as expected 00:46:23.348 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:46:23.348 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:23.348 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:46:23.348 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:46:23.348 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:46:23.608 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:46:23.608 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:46:23.608 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:46:23.608 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:46:23.608 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:23.608 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:46:23.608 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:23.608 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:46:23.608 rmmod nvme_tcp 00:46:23.608 rmmod nvme_fabrics 00:46:23.608 rmmod nvme_keyring 00:46:23.608 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:23.608 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:46:23.608 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:46:23.608 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 70385 ']' 00:46:23.608 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 70385 00:46:23.608 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 70385 ']' 00:46:23.608 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 70385 00:46:23.608 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:46:23.608 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:23.608 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70385 00:46:23.608 killing process with pid 70385 00:46:23.608 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:23.608 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:23.608 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70385' 00:46:23.608 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 70385 00:46:23.608 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 70385 00:46:23.866 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:46:23.866 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:46:23.866 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:46:23.866 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:46:23.866 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:46:23.866 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:46:23.866 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:46:23.866 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:23.866 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:46:23.866 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:46:23.866 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:46:23.866 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:46:23.866 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:46:23.867 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:46:23.867 09:57:30 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:46:23.867 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:46:23.867 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:46:23.867 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:46:23.867 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:46:23.867 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:46:23.867 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:46:23.867 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:46:24.125 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:46:24.125 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:24.125 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:24.125 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:24.125 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:46:24.125 ************************************ 00:46:24.125 END TEST nvmf_fio_target 00:46:24.125 ************************************ 00:46:24.125 00:46:24.125 real 0m19.842s 00:46:24.125 user 1m15.643s 00:46:24.125 sys 0m8.441s 00:46:24.125 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:24.125 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:46:24.125 09:57:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:46:24.125 09:57:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:46:24.125 09:57:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:24.125 09:57:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:46:24.125 ************************************ 00:46:24.125 START TEST nvmf_bdevio 00:46:24.125 ************************************ 00:46:24.125 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:46:24.125 * Looking for test storage... 
00:46:24.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:46:24.125 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:46:24.125 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:46:24.125 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:46:24.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:24.385 --rc genhtml_branch_coverage=1 00:46:24.385 --rc genhtml_function_coverage=1 00:46:24.385 --rc genhtml_legend=1 00:46:24.385 --rc geninfo_all_blocks=1 00:46:24.385 --rc geninfo_unexecuted_blocks=1 00:46:24.385 00:46:24.385 ' 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:46:24.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:24.385 --rc genhtml_branch_coverage=1 00:46:24.385 --rc genhtml_function_coverage=1 00:46:24.385 --rc genhtml_legend=1 00:46:24.385 --rc geninfo_all_blocks=1 00:46:24.385 --rc geninfo_unexecuted_blocks=1 00:46:24.385 00:46:24.385 ' 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:46:24.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:24.385 --rc genhtml_branch_coverage=1 00:46:24.385 --rc genhtml_function_coverage=1 00:46:24.385 --rc genhtml_legend=1 00:46:24.385 --rc geninfo_all_blocks=1 00:46:24.385 --rc geninfo_unexecuted_blocks=1 00:46:24.385 00:46:24.385 ' 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:46:24.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:24.385 --rc genhtml_branch_coverage=1 00:46:24.385 --rc genhtml_function_coverage=1 00:46:24.385 --rc genhtml_legend=1 00:46:24.385 --rc geninfo_all_blocks=1 00:46:24.385 --rc geninfo_unexecuted_blocks=1 00:46:24.385 00:46:24.385 ' 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:24.385 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:24.386 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:46:24.386 Cannot find device "nvmf_init_br" 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:46:24.386 Cannot find device "nvmf_init_br2" 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:46:24.386 Cannot find device "nvmf_tgt_br" 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:46:24.386 Cannot find device "nvmf_tgt_br2" 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:46:24.386 Cannot find device "nvmf_init_br" 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:46:24.386 Cannot find device "nvmf_init_br2" 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:46:24.386 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:46:24.645 Cannot find device "nvmf_tgt_br" 00:46:24.645 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:46:24.645 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:46:24.645 Cannot find device "nvmf_tgt_br2" 00:46:24.645 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:46:24.645 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:46:24.645 Cannot find device "nvmf_br" 00:46:24.645 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:46:24.645 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:46:24.645 Cannot find device "nvmf_init_if" 00:46:24.645 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:46:24.646 Cannot find device "nvmf_init_if2" 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:46:24.646 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:46:24.646 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:46:24.646 
09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:46:24.646 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:46:24.646 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:46:24.646 00:46:24.646 --- 10.0.0.3 ping statistics --- 00:46:24.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:24.646 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:46:24.646 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:46:24.904 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:46:24.904 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:46:24.904 00:46:24.904 --- 10.0.0.4 ping statistics --- 00:46:24.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:24.904 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:46:24.904 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:46:24.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:46:24.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:46:24.904 00:46:24.904 --- 10.0.0.1 ping statistics --- 00:46:24.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:24.904 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:46:24.904 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:46:24.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:46:24.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.041 ms 00:46:24.904 00:46:24.904 --- 10.0.0.2 ping statistics --- 00:46:24.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:24.904 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:46:24.904 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:24.904 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:46:24.904 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:46:24.904 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:24.904 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:46:24.904 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:46:24.904 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:24.904 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:46:24.904 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:46:24.904 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:46:24.904 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:46:24.904 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:24.904 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:46:24.904 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=71302 00:46:24.904 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:46:24.904 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 71302 00:46:24.904 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 71302 ']' 00:46:24.904 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:24.905 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:24.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:24.905 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:24.905 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:24.905 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:46:24.905 [2024-12-09 09:57:31.770562] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
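Every firewall rule in this fixture goes through the ipts helper, whose expansion is visible at common.sh@790 above: each rule is inserted with an SPDK_NVMF-tagged comment so the eventual iptr cleanup (seen near the end of the suite) can strip all test rules in one pass without touching anything else on the host. A sketch of the pattern, assuming iptables with the comment match module available:

    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }   # tag every rule we add
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the target
    ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # let bridged traffic flow
    # teardown, as iptr does later: drop every tagged rule wholesale
    iptables-save | grep -v SPDK_NVMF | iptables-restore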
00:46:24.905 [2024-12-09 09:57:31.770630] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:24.905 [2024-12-09 09:57:31.925976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:46:25.162 [2024-12-09 09:57:31.981244] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:25.162 [2024-12-09 09:57:31.981315] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:25.162 [2024-12-09 09:57:31.981323] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:25.162 [2024-12-09 09:57:31.981329] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:25.162 [2024-12-09 09:57:31.981333] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:25.162 [2024-12-09 09:57:31.982335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:46:25.162 [2024-12-09 09:57:31.982468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:46:25.162 [2024-12-09 09:57:31.982597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:46:25.162 [2024-12-09 09:57:31.982603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:46:25.729 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:25.729 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:46:25.729 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:46:25.729 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:46:25.729 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:46:25.988 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:25.988 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:46:25.988 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:25.988 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:46:25.988 [2024-12-09 09:57:32.799971] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:25.988 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:25.988 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:46:25.988 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:25.988 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:46:25.988 Malloc0 00:46:25.988 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:25.988 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:46:25.988 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 
00:46:25.988 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:46:25.988 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:25.988 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:46:25.988 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:25.988 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:46:25.988 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:25.988 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:46:25.988 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:25.988 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:46:25.988 [2024-12-09 09:57:32.878661] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:46:25.988 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:25.988 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:46:25.988 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:46:25.988 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:46:25.988 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:46:25.988 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:46:25.988 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:46:25.988 { 00:46:25.988 "params": { 00:46:25.988 "name": "Nvme$subsystem", 00:46:25.988 "trtype": "$TEST_TRANSPORT", 00:46:25.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:25.988 "adrfam": "ipv4", 00:46:25.988 "trsvcid": "$NVMF_PORT", 00:46:25.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:25.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:25.988 "hdgst": ${hdgst:-false}, 00:46:25.988 "ddgst": ${ddgst:-false} 00:46:25.988 }, 00:46:25.988 "method": "bdev_nvme_attach_controller" 00:46:25.988 } 00:46:25.988 EOF 00:46:25.988 )") 00:46:25.988 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:46:25.988 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:46:25.988 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:46:25.988 09:57:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:46:25.988 "params": { 00:46:25.988 "name": "Nvme1", 00:46:25.988 "trtype": "tcp", 00:46:25.988 "traddr": "10.0.0.3", 00:46:25.988 "adrfam": "ipv4", 00:46:25.988 "trsvcid": "4420", 00:46:25.988 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:25.988 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:25.988 "hdgst": false, 00:46:25.988 "ddgst": false 00:46:25.988 }, 00:46:25.988 "method": "bdev_nvme_attach_controller" 00:46:25.988 }' 00:46:25.988 [2024-12-09 09:57:32.941063] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
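The rpc_cmd calls above configure the freshly started target over /var/tmp/spdk.sock, and gen_nvmf_target_json renders the heredoc into the --json config that bdevio reads from /dev/fd/62. Outside the harness the same target could plausibly be assembled with SPDK's scripts/rpc.py; the arguments below are copied from the trace, while the default RPC socket is an assumption:

    rpc.py nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB IO unit size
    rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420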
00:46:25.988 [2024-12-09 09:57:32.941131] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71356 ] 00:46:26.247 [2024-12-09 09:57:33.092094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:46:26.247 [2024-12-09 09:57:33.151092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:26.247 [2024-12-09 09:57:33.151203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:26.247 [2024-12-09 09:57:33.151205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:46:26.505 I/O targets: 00:46:26.505 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:46:26.505 00:46:26.505 00:46:26.505 CUnit - A unit testing framework for C - Version 2.1-3 00:46:26.505 http://cunit.sourceforge.net/ 00:46:26.505 00:46:26.505 00:46:26.505 Suite: bdevio tests on: Nvme1n1 00:46:26.505 Test: blockdev write read block ...passed 00:46:26.505 Test: blockdev write zeroes read block ...passed 00:46:26.505 Test: blockdev write zeroes read no split ...passed 00:46:26.505 Test: blockdev write zeroes read split ...passed 00:46:26.505 Test: blockdev write zeroes read split partial ...passed 00:46:26.505 Test: blockdev reset ...[2024-12-09 09:57:33.444400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:46:26.505 [2024-12-09 09:57:33.444522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f4f50 (9): Bad file descriptor 00:46:26.505 passed 00:46:26.505 Test: blockdev write read 8 blocks ...passed 00:46:26.505 Test: blockdev write read size > 128k ...[2024-12-09 09:57:33.456010] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:46:26.505 passed 00:46:26.505 Test: blockdev write read invalid size ...passed 00:46:26.505 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:46:26.505 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:46:26.505 Test: blockdev write read max offset ...passed 00:46:26.764 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:46:26.764 Test: blockdev writev readv 8 blocks ...passed 00:46:26.764 Test: blockdev writev readv 30 x 1block ...passed 00:46:26.764 Test: blockdev writev readv block ...passed 00:46:26.764 Test: blockdev writev readv size > 128k ...passed 00:46:26.764 Test: blockdev writev readv size > 128k in two iovs ...passed 00:46:26.764 Test: blockdev comparev and writev ...[2024-12-09 09:57:33.629990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:46:26.764 [2024-12-09 09:57:33.630188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:26.764 [2024-12-09 09:57:33.630283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:46:26.764 [2024-12-09 09:57:33.630344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:46:26.764 [2024-12-09 09:57:33.630739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:46:26.764 [2024-12-09 09:57:33.630830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:46:26.764 [2024-12-09 09:57:33.630915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:46:26.764 [2024-12-09 09:57:33.630984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:46:26.764 [2024-12-09 09:57:33.631369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:46:26.764 [2024-12-09 09:57:33.631437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:46:26.764 [2024-12-09 09:57:33.631540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:46:26.764 [2024-12-09 09:57:33.631600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:46:26.764 [2024-12-09 09:57:33.631949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:46:26.764 [2024-12-09 09:57:33.632019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:46:26.764 [2024-12-09 09:57:33.632081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:46:26.764 [2024-12-09 09:57:33.632140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:46:26.764 passed 00:46:26.764 Test: blockdev nvme 
passthru rw ...passed 00:46:26.764 Test: blockdev nvme passthru vendor specific ...[2024-12-09 09:57:33.715798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:46:26.764 [2024-12-09 09:57:33.716039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:46:26.764 [2024-12-09 09:57:33.716251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:46:26.764 [2024-12-09 09:57:33.716325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:46:26.764 [2024-12-09 09:57:33.716507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:46:26.764 [2024-12-09 09:57:33.716579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:46:26.764 [2024-12-09 09:57:33.716743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:46:26.764 [2024-12-09 09:57:33.716806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:46:26.764 passed 00:46:26.764 Test: blockdev nvme admin passthru ...passed 00:46:26.764 Test: blockdev copy ...passed 00:46:26.764 00:46:26.764 Run Summary: Type Total Ran Passed Failed Inactive 00:46:26.764 suites 1 1 n/a 0 0 00:46:26.764 tests 23 23 23 0 0 00:46:26.764 asserts 152 152 152 0 n/a 00:46:26.764 00:46:26.764 Elapsed time = 0.923 seconds 00:46:27.022 09:57:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:27.022 09:57:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:27.022 09:57:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:46:27.022 09:57:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:27.022 09:57:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:46:27.022 09:57:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:46:27.022 09:57:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:46:27.022 09:57:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:46:27.022 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:27.022 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:46:27.022 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:27.022 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:27.022 rmmod nvme_tcp 00:46:27.022 rmmod nvme_fabrics 00:46:27.022 rmmod nvme_keyring 00:46:27.022 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:27.022 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:46:27.023 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:46:27.023 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 71302 ']' 00:46:27.023 09:57:34 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 71302 00:46:27.023 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 71302 ']' 00:46:27.023 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 71302 00:46:27.023 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:46:27.281 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:27.281 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71302 00:46:27.281 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:46:27.281 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:46:27.281 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71302' 00:46:27.281 killing process with pid 71302 00:46:27.281 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 71302 00:46:27.281 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 71302 00:46:27.282 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:46:27.282 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:46:27.282 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:46:27.282 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:46:27.282 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:46:27.282 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:46:27.282 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:46:27.282 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:27.282 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:46:27.282 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:46:27.540 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:46:27.540 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:46:27.540 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:46:27.540 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:46:27.540 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:46:27.540 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:46:27.540 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:46:27.540 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:46:27.540 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:46:27.540 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link 
delete nvmf_init_if2 00:46:27.540 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:46:27.540 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:46:27.540 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:46:27.540 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:27.540 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:27.540 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:27.540 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:46:27.540 00:46:27.540 real 0m3.534s 00:46:27.540 user 0m11.513s 00:46:27.540 sys 0m0.927s 00:46:27.540 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:27.540 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:46:27.540 ************************************ 00:46:27.540 END TEST nvmf_bdevio 00:46:27.540 ************************************ 00:46:27.799 09:57:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:46:27.799 00:46:27.799 real 3m35.515s 00:46:27.799 user 11m15.502s 00:46:27.799 sys 0m58.048s 00:46:27.799 09:57:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:27.799 09:57:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:46:27.799 ************************************ 00:46:27.799 END TEST nvmf_target_core 00:46:27.799 ************************************ 00:46:27.799 09:57:34 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:46:27.799 09:57:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:46:27.799 09:57:34 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:27.799 09:57:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:46:27.799 ************************************ 00:46:27.799 START TEST nvmf_target_extra 00:46:27.799 ************************************ 00:46:27.799 09:57:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:46:27.799 * Looking for test storage... 
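Each nested suite re-probes the installed lcov before deciding which coverage flags to pass, which is what the scripts/common.sh xtrace just below walks through: split both version strings on dots and dashes and compare field by field until one side wins ("lt 1.15 2" succeeds here, so the legacy --rc option spelling is exported). A rough stand-in for that walk; ver_lt is a hypothetical name condensing the real lt/cmp_versions pair:

    ver_lt() {                            # returns 0 (true) when $1 sorts before $2
      local IFS=.-: i
      local -a v1=($1) v2=($2)
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
        ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
      done
      return 1                            # equal versions are not less-than
    }
    ver_lt 1.15 2 && echo "lcov predates 2.x: use the --rc lcov_* option spelling"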
00:46:27.799 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:46:27.799 09:57:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:46:27.799 09:57:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:46:27.799 09:57:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:46:28.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:28.059 --rc genhtml_branch_coverage=1 00:46:28.059 --rc genhtml_function_coverage=1 00:46:28.059 --rc genhtml_legend=1 00:46:28.059 --rc geninfo_all_blocks=1 00:46:28.059 --rc geninfo_unexecuted_blocks=1 00:46:28.059 00:46:28.059 ' 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:46:28.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:28.059 --rc genhtml_branch_coverage=1 00:46:28.059 --rc genhtml_function_coverage=1 00:46:28.059 --rc genhtml_legend=1 00:46:28.059 --rc geninfo_all_blocks=1 00:46:28.059 --rc geninfo_unexecuted_blocks=1 00:46:28.059 00:46:28.059 ' 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:46:28.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:28.059 --rc genhtml_branch_coverage=1 00:46:28.059 --rc genhtml_function_coverage=1 00:46:28.059 --rc genhtml_legend=1 00:46:28.059 --rc geninfo_all_blocks=1 00:46:28.059 --rc geninfo_unexecuted_blocks=1 00:46:28.059 00:46:28.059 ' 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:46:28.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:28.059 --rc genhtml_branch_coverage=1 00:46:28.059 --rc genhtml_function_coverage=1 00:46:28.059 --rc genhtml_legend=1 00:46:28.059 --rc geninfo_all_blocks=1 00:46:28.059 --rc geninfo_unexecuted_blocks=1 00:46:28.059 00:46:28.059 ' 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:28.059 09:57:34 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:28.059 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:46:28.059 09:57:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:46:28.060 09:57:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:46:28.060 09:57:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:46:28.060 09:57:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:28.060 09:57:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:46:28.060 ************************************ 00:46:28.060 START TEST nvmf_example 00:46:28.060 ************************************ 00:46:28.060 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:46:28.060 * Looking for test storage... 
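nvmf_example.sh exercises SPDK's standalone nvmf example application rather than nvmf_tgt; the trace below assembles its argv as a bash array before repeating the usual veth/netns bring-up. A sketch of that assembly as it appears later in the trace (SPDK_EXAMPLE_DIR comes from the build environment; the launch line is illustrative and not part of this log):

    NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf")          # the examples/nvmf binary
    NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000)   # flags copied from the trace as-is
    "${NVMF_EXAMPLE[@]}" &                           # started against the 10.0.0.x fabric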
00:46:28.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:46:28.060 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:46:28.060 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:46:28.060 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:46:28.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:28.320 --rc genhtml_branch_coverage=1 00:46:28.320 --rc genhtml_function_coverage=1 00:46:28.320 --rc genhtml_legend=1 00:46:28.320 --rc geninfo_all_blocks=1 00:46:28.320 --rc geninfo_unexecuted_blocks=1 00:46:28.320 00:46:28.320 ' 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:46:28.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:28.320 --rc genhtml_branch_coverage=1 00:46:28.320 --rc genhtml_function_coverage=1 00:46:28.320 --rc genhtml_legend=1 00:46:28.320 --rc geninfo_all_blocks=1 00:46:28.320 --rc geninfo_unexecuted_blocks=1 00:46:28.320 00:46:28.320 ' 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:46:28.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:28.320 --rc genhtml_branch_coverage=1 00:46:28.320 --rc genhtml_function_coverage=1 00:46:28.320 --rc genhtml_legend=1 00:46:28.320 --rc geninfo_all_blocks=1 00:46:28.320 --rc geninfo_unexecuted_blocks=1 00:46:28.320 00:46:28.320 ' 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:46:28.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:28.320 --rc genhtml_branch_coverage=1 00:46:28.320 --rc genhtml_function_coverage=1 00:46:28.320 --rc genhtml_legend=1 00:46:28.320 --rc geninfo_all_blocks=1 00:46:28.320 --rc geninfo_unexecuted_blocks=1 00:46:28.320 00:46:28.320 ' 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:46:28.320 09:57:35 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:28.320 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:46:28.320 09:57:35 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:46:28.320 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@460 -- # nvmf_veth_init 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:46:28.321 Cannot find device "nvmf_init_br" 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # true 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:46:28.321 Cannot find device "nvmf_init_br2" 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # true 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:46:28.321 Cannot find device "nvmf_tgt_br" 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # true 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:46:28.321 Cannot find device "nvmf_tgt_br2" 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # true 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:46:28.321 Cannot find device "nvmf_init_br" 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # true 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:46:28.321 Cannot find device "nvmf_init_br2" 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # true 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:46:28.321 Cannot find device "nvmf_tgt_br" 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # true 00:46:28.321 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:46:28.624 Cannot find device "nvmf_tgt_br2" 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # true 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:46:28.624 Cannot find device "nvmf_br" 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # true 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:46:28.624 Cannot find 
device "nvmf_init_if" 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # true 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:46:28.624 Cannot find device "nvmf_init_if2" 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # true 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:46:28.624 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # true 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:46:28.624 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # true 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@203 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:46:28.624 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:46:28.624 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:46:28.624 00:46:28.624 --- 10.0.0.3 ping statistics --- 00:46:28.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:28.624 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:46:28.624 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:46:28.624 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.110 ms 00:46:28.624 00:46:28.624 --- 10.0.0.4 ping statistics --- 00:46:28.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:28.624 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:46:28.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
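A condensed map of the topology the nvmf/common.sh steps above construct (interface names, addresses, and the namespace exactly as they appear in this trace):

    veth pair                       address       lives in           bridge end (master nvmf_br)
    nvmf_init_if  / nvmf_init_br    10.0.0.1/24   host netns         nvmf_init_br
    nvmf_init_if2 / nvmf_init_br2   10.0.0.2/24   host netns         nvmf_init_br2
    nvmf_tgt_if   / nvmf_tgt_br     10.0.0.3/24   nvmf_tgt_ns_spdk   nvmf_tgt_br
    nvmf_tgt_if2  / nvmf_tgt_br2    10.0.0.4/24   nvmf_tgt_ns_spdk   nvmf_tgt_br2

All four *_br peers stay in the host namespace and are enslaved to the nvmf_br bridge, so the initiator addresses (10.0.0.1/2) and the target addresses inside the namespace (10.0.0.3/4) share one L2 segment; the four ping checks around this point confirm that path in both directions, and the ipts rules open TCP port 4420 (the NVMe-oF default) on the initiator interfaces.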
00:46:28.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:46:28.624 00:46:28.624 --- 10.0.0.1 ping statistics --- 00:46:28.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:28.624 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:46:28.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:46:28.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:46:28.624 00:46:28.624 --- 10.0.0.2 ping statistics --- 00:46:28.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:28.624 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@461 -- # return 0 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:46:28.624 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:46:28.903 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:46:28.903 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:46:28.903 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:28.903 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:46:28.903 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:46:28.903 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:46:28.903 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:46:28.903 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=71651 00:46:28.903 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:46:28.903 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 71651 00:46:28.903 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 71651 ']' 00:46:28.903 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:28.903 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:28.903 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:28.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:28.903 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:28.903 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:46:29.840 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:29.840 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:46:29.840 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:46:29.840 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:46:29.840 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:46:29.840 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:46:29.840 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:29.840 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:46:29.840 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:29.840 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:46:29.840 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:29.840 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:46:29.840 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:29.840 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:46:29.840 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:46:29.840 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:29.840 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:46:29.840 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:29.840 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:46:29.841 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:46:29.841 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:29.841 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:46:29.841 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:29.841 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:46:29.841 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:29.841 09:57:36 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:46:29.841 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:29.841 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:46:29.841 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:46:42.073 Initializing NVMe Controllers
00:46:42.073 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:46:42.073 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:46:42.073 Initialization complete. Launching workers.
00:46:42.073 ========================================================
00:46:42.073                                                                           Latency(us)
00:46:42.073 Device Information                                          :       IOPS      MiB/s    Average        min        max
00:46:42.073 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   17362.00      67.82    3686.09     602.62   21104.71
00:46:42.073 ========================================================
00:46:42.073 Total                                                       :   17362.00      67.82    3686.09     602.62   21104.71
00:46:42.073
00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:42.073 rmmod nvme_tcp 00:46:42.073 rmmod nvme_fabrics 00:46:42.073 rmmod nvme_keyring 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 71651 ']' 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 71651 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 71651 ']' 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 71651 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71651 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:46:42.073 09:57:47 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:46:42.073 killing process with pid 71651 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71651' 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 71651 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 71651 00:46:42.073 nvmf threads initialize successfully 00:46:42.073 bdev subsystem init successfully 00:46:42.073 created a nvmf target service 00:46:42.073 create targets's poll groups done 00:46:42.073 all subsystems of target started 00:46:42.073 nvmf target is running 00:46:42.073 all subsystems of target stopped 00:46:42.073 destroy targets's poll groups done 00:46:42.073 destroyed the nvmf target service 00:46:42.073 bdev subsystem finish successfully 00:46:42.073 nvmf threads destroy successfully 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@246 -- # remove_spdk_ns 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@300 -- # return 0 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:46:42.073 ************************************ 00:46:42.073 END TEST nvmf_example 00:46:42.073 ************************************ 00:46:42.073 00:46:42.073 real 0m12.775s 00:46:42.073 user 0m44.854s 00:46:42.073 sys 0m1.915s 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:46:42.073 ************************************ 00:46:42.073 START TEST nvmf_filesystem 00:46:42.073 ************************************ 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:46:42.073 * Looking for test storage... 
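Stripped of its xtrace prefixes, the nvmf_example test that just ended reduces to a target bring-up over RPC, one spdk_nvme_perf run, and a teardown. A sketch of the same sequence driven through SPDK's standalone scripts/rpc.py client (an assumption; the trace issues these RPCs via the rpc_cmd wrapper, with identical names and arguments):

    # target bring-up, against the nvmf app running inside nvmf_tgt_ns_spdk
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512          # 64 MB malloc bdev, 512-byte blocks -> Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # drive it from the initiator side of the bridge
    spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
    # teardown: drop every rule ipts tagged earlier in one pass, then
    # dismantle the links and namespace as shown above
    iptables-save | grep -v SPDK_NVMF | iptables-restore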
00:46:42.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:42.073 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:46:42.074 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:46:42.074 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:42.074 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:42.074 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:46:42.074 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:46:42.074 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:42.074 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:46:42.074 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:46:42.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:42.074 --rc genhtml_branch_coverage=1 00:46:42.074 --rc genhtml_function_coverage=1 00:46:42.074 --rc genhtml_legend=1 00:46:42.074 --rc geninfo_all_blocks=1 00:46:42.074 --rc geninfo_unexecuted_blocks=1 00:46:42.074 00:46:42.074 ' 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:46:42.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:42.074 --rc genhtml_branch_coverage=1 00:46:42.074 --rc genhtml_function_coverage=1 00:46:42.074 --rc genhtml_legend=1 00:46:42.074 --rc geninfo_all_blocks=1 00:46:42.074 --rc geninfo_unexecuted_blocks=1 00:46:42.074 00:46:42.074 ' 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:46:42.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:42.074 --rc genhtml_branch_coverage=1 00:46:42.074 --rc genhtml_function_coverage=1 00:46:42.074 --rc genhtml_legend=1 00:46:42.074 --rc geninfo_all_blocks=1 00:46:42.074 --rc geninfo_unexecuted_blocks=1 00:46:42.074 00:46:42.074 ' 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:46:42.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:42.074 --rc genhtml_branch_coverage=1 00:46:42.074 --rc genhtml_function_coverage=1 00:46:42.074 --rc genhtml_legend=1 00:46:42.074 --rc geninfo_all_blocks=1 00:46:42.074 --rc geninfo_unexecuted_blocks=1 00:46:42.074 00:46:42.074 ' 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:46:42.074 09:57:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 
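The CONFIG_* values streaming past here mirror this build's ./configure result; once build_config.sh has been sourced, test scripts can branch on them as ordinary shell variables. An illustrative check (hypothetical usage, not commands from this trace; the flag values are the ones shown above):

    [[ $CONFIG_UBSAN == y ]] && echo "UBSan-instrumented build"
    [[ $CONFIG_GOLANG == y ]] && echo "Go bindings enabled"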
00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:46:42.074 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=y 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=y 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # 
CONFIG_TESTS=y 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:46:42.075 #define SPDK_CONFIG_H 00:46:42.075 #define SPDK_CONFIG_AIO_FSDEV 1 00:46:42.075 #define SPDK_CONFIG_APPS 1 00:46:42.075 #define SPDK_CONFIG_ARCH 
native 00:46:42.075 #undef SPDK_CONFIG_ASAN 00:46:42.075 #define SPDK_CONFIG_AVAHI 1 00:46:42.075 #undef SPDK_CONFIG_CET 00:46:42.075 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:46:42.075 #define SPDK_CONFIG_COVERAGE 1 00:46:42.075 #define SPDK_CONFIG_CROSS_PREFIX 00:46:42.075 #undef SPDK_CONFIG_CRYPTO 00:46:42.075 #undef SPDK_CONFIG_CRYPTO_MLX5 00:46:42.075 #undef SPDK_CONFIG_CUSTOMOCF 00:46:42.075 #undef SPDK_CONFIG_DAOS 00:46:42.075 #define SPDK_CONFIG_DAOS_DIR 00:46:42.075 #define SPDK_CONFIG_DEBUG 1 00:46:42.075 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:46:42.075 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:46:42.075 #define SPDK_CONFIG_DPDK_INC_DIR 00:46:42.075 #define SPDK_CONFIG_DPDK_LIB_DIR 00:46:42.075 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:46:42.075 #undef SPDK_CONFIG_DPDK_UADK 00:46:42.075 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:46:42.075 #define SPDK_CONFIG_EXAMPLES 1 00:46:42.075 #undef SPDK_CONFIG_FC 00:46:42.075 #define SPDK_CONFIG_FC_PATH 00:46:42.075 #define SPDK_CONFIG_FIO_PLUGIN 1 00:46:42.075 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:46:42.075 #define SPDK_CONFIG_FSDEV 1 00:46:42.075 #undef SPDK_CONFIG_FUSE 00:46:42.075 #undef SPDK_CONFIG_FUZZER 00:46:42.075 #define SPDK_CONFIG_FUZZER_LIB 00:46:42.075 #define SPDK_CONFIG_GOLANG 1 00:46:42.075 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:46:42.075 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:46:42.075 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:46:42.075 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:46:42.075 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:46:42.075 #undef SPDK_CONFIG_HAVE_LIBBSD 00:46:42.075 #undef SPDK_CONFIG_HAVE_LZ4 00:46:42.075 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:46:42.075 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:46:42.075 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:46:42.075 #define SPDK_CONFIG_IDXD 1 00:46:42.075 #define SPDK_CONFIG_IDXD_KERNEL 1 00:46:42.075 #undef SPDK_CONFIG_IPSEC_MB 00:46:42.075 #define SPDK_CONFIG_IPSEC_MB_DIR 00:46:42.075 #define SPDK_CONFIG_ISAL 1 00:46:42.075 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:46:42.075 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:46:42.075 #define SPDK_CONFIG_LIBDIR 00:46:42.075 #undef SPDK_CONFIG_LTO 00:46:42.075 #define SPDK_CONFIG_MAX_LCORES 128 00:46:42.075 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:46:42.075 #define SPDK_CONFIG_NVME_CUSE 1 00:46:42.075 #undef SPDK_CONFIG_OCF 00:46:42.075 #define SPDK_CONFIG_OCF_PATH 00:46:42.075 #define SPDK_CONFIG_OPENSSL_PATH 00:46:42.075 #undef SPDK_CONFIG_PGO_CAPTURE 00:46:42.075 #define SPDK_CONFIG_PGO_DIR 00:46:42.075 #undef SPDK_CONFIG_PGO_USE 00:46:42.075 #define SPDK_CONFIG_PREFIX /usr/local 00:46:42.075 #undef SPDK_CONFIG_RAID5F 00:46:42.075 #undef SPDK_CONFIG_RBD 00:46:42.075 #define SPDK_CONFIG_RDMA 1 00:46:42.075 #define SPDK_CONFIG_RDMA_PROV verbs 00:46:42.075 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:46:42.075 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:46:42.075 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:46:42.075 #define SPDK_CONFIG_SHARED 1 00:46:42.075 #undef SPDK_CONFIG_SMA 00:46:42.075 #define SPDK_CONFIG_TESTS 1 00:46:42.075 #undef SPDK_CONFIG_TSAN 00:46:42.075 #define SPDK_CONFIG_UBLK 1 00:46:42.075 #define SPDK_CONFIG_UBSAN 1 00:46:42.075 #undef SPDK_CONFIG_UNIT_TESTS 00:46:42.075 #undef SPDK_CONFIG_URING 00:46:42.075 #define SPDK_CONFIG_URING_PATH 00:46:42.075 #undef SPDK_CONFIG_URING_ZNS 00:46:42.075 #define SPDK_CONFIG_USDT 1 00:46:42.075 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:46:42.075 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:46:42.075 
#undef SPDK_CONFIG_VFIO_USER 00:46:42.075 #define SPDK_CONFIG_VFIO_USER_DIR 00:46:42.075 #define SPDK_CONFIG_VHOST 1 00:46:42.075 #define SPDK_CONFIG_VIRTIO 1 00:46:42.075 #undef SPDK_CONFIG_VTUNE 00:46:42.075 #define SPDK_CONFIG_VTUNE_DIR 00:46:42.075 #define SPDK_CONFIG_WERROR 1 00:46:42.075 #define SPDK_CONFIG_WPDK_DIR 00:46:42.075 #undef SPDK_CONFIG_XNVME 00:46:42.075 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:42.075 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export 
SPDK_TEST_NVME_CUSE 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:46:42.076 
09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:46:42.076 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:46:42.077 09:57:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 1 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # 
SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:46:42.077 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:46:42.078 09:57:48 
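
The long run of paired ": value" / "export NAME" records above is autotest_common.sh giving every SPDK_TEST_* / SPDK_RUN_* switch a default and exporting it: each ": 0" (or ": 1", ": tcp") line is the xtrace of a no-op colon command whose argument is a "${VAR:=default}" expansion, so only unset variables pick up the default. For this job the relevant ones are SPDK_RUN_FUNCTIONAL_TEST=1, SPDK_TEST_NVMF=1 and SPDK_TEST_NVMF_TRANSPORT=tcp. The repeated directories in LD_LIBRARY_PATH and PYTHONPATH are also real, not corruption: every re-source of the path scripts prepends the same entries again. A minimal sketch of the default-and-export idiom (the flag name below is a placeholder, not one of SPDK's):

# ':' is a no-op builtin; evaluating "${VAR:=0}" as its argument
# assigns 0 only when VAR is unset or empty; the export then makes
# the flag visible to child processes.
: "${SPDK_TEST_PLACEHOLDER:=0}"
export SPDK_TEST_PLACEHOLDER
# Under 'set -x' this traces as two records, the same shape as above:
#   : 0
#   export SPDK_TEST_PLACEHOLDER
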
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
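
Just above, the script also picks its coverage tooling and build knobs: "[[ '' == *clang* ]]" expands to an empty compiler string, so the build is treated as gcc and the prepared "--gcov-tool .../llvm-gcov.sh" option for llvm-cov stays unused (lcov_opt ends up empty), while HUGEMEM=4096 and MAKEFLAGS=-j10 size the hugepage pool and build parallelism. A sketch of that compiler-conditional pick, under the assumption that the empty string really is $CC:

# Use llvm-cov's gcov shim only when compiling with clang;
# stock gcov reads gcc's .gcda/.gcno output directly.
if [[ ${CC:-} == *clang* ]]; then
  lcov_opt="--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh"
else
  lcov_opt=""
fi
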
common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 71924 ]] 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 71924 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:46:42.078 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.aI9Jh8 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.aI9Jh8/tests/target /tmp/spdk.aI9Jh8 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 
-- # fss["$mount"]=btrfs 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13954002944 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5615030272 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6256394240 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=2486431744 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=20140032 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13954002944 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5615030272 00:46:42.079 
09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6266290176 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=139264 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 
-- # fss["$mount"]=fuse.sshfs 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=92964290560 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6738489344 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:46:42.079 * Looking for test storage... 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/home 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=13954002944 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:46:42.079 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:46:42.079 09:57:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:46:42.079 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:46:42.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:42.080 --rc genhtml_branch_coverage=1 00:46:42.080 --rc genhtml_function_coverage=1 00:46:42.080 --rc genhtml_legend=1 00:46:42.080 --rc geninfo_all_blocks=1 00:46:42.080 --rc geninfo_unexecuted_blocks=1 00:46:42.080 00:46:42.080 ' 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:46:42.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:42.080 --rc genhtml_branch_coverage=1 00:46:42.080 --rc genhtml_function_coverage=1 00:46:42.080 --rc genhtml_legend=1 00:46:42.080 --rc geninfo_all_blocks=1 00:46:42.080 --rc geninfo_unexecuted_blocks=1 00:46:42.080 00:46:42.080 ' 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:46:42.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:42.080 --rc genhtml_branch_coverage=1 00:46:42.080 --rc genhtml_function_coverage=1 00:46:42.080 --rc genhtml_legend=1 00:46:42.080 --rc geninfo_all_blocks=1 00:46:42.080 --rc geninfo_unexecuted_blocks=1 00:46:42.080 00:46:42.080 ' 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:46:42.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:42.080 --rc genhtml_branch_coverage=1 00:46:42.080 --rc genhtml_function_coverage=1 00:46:42.080 --rc genhtml_legend=1 00:46:42.080 --rc geninfo_all_blocks=1 00:46:42.080 --rc geninfo_unexecuted_blocks=1 00:46:42.080 00:46:42.080 ' 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- 
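
The scripts/common.sh trace a few records up is "lt 1.15 2": deciding whether the installed lcov (1.15 here) predates 2.x, so the "--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1" options get baked into LCOV_OPTS and LCOV. cmp_versions splits both version strings on the characters ". - :" into arrays and compares them field by field, with each numeric component validated by the decimal helper before comparison. An equivalent check, assuming purely numeric components (function name invented):

# Component-wise "is ver1 < ver2", splitting on '.', '-' and ':'.
version_lt() {
  local -a v1 v2
  local i n
  IFS='.-:' read -ra v1 <<< "$1"
  IFS='.-:' read -ra v2 <<< "$2"
  n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < n; i++ )); do
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly smaller
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
  done
  return 1   # all components equal
}
version_lt 1.15 2 && echo "old lcov: add --rc branch/function coverage options"
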
# uname -s 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:42.080 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:42.080 09:57:48 
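
One genuine error surfaces in this stretch: "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected". It is benign: the traced test is '[' '' -eq 1 ']', an empty variable handed to the numeric -eq operator inside build_nvmf_app_args, so the test simply fails and the optional app argument is skipped. A defensive variant that keeps the operand numeric (illustrative names, not a patch to common.sh):

# Defaulting the expansion avoids feeding '' to -eq:
if [ "${SOME_NUMERIC_FLAG:-0}" -eq 1 ]; then
  NVMF_APP+=(--placeholder-option)   # whatever the flag would enable
fi
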
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:46:42.080 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 
-- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:46:42.081 Cannot find device "nvmf_init_br" 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:46:42.081 Cannot find device "nvmf_init_br2" 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:46:42.081 Cannot find device "nvmf_tgt_br" 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # true 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:46:42.081 Cannot find device "nvmf_tgt_br2" 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # true 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:46:42.081 Cannot find device "nvmf_init_br" 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # true 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:46:42.081 Cannot find device "nvmf_init_br2" 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # true 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:46:42.081 Cannot find device "nvmf_tgt_br" 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # true 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:46:42.081 Cannot find device "nvmf_tgt_br2" 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # true 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:46:42.081 Cannot find device "nvmf_br" 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # true 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:46:42.081 Cannot find device "nvmf_init_if" 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # true 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:46:42.081 Cannot find device "nvmf_init_if2" 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # true 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:46:42.081 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # true 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:46:42.081 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # true 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:46:42.081 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:46:42.081 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.113 ms 00:46:42.081 00:46:42.081 --- 10.0.0.3 ping statistics --- 00:46:42.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:42.081 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:46:42.081 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:46:42.081 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:46:42.081 00:46:42.081 --- 10.0.0.4 ping statistics --- 00:46:42.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:42.081 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:46:42.081 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:46:42.081 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:46:42.081 00:46:42.081 --- 10.0.0.1 ping statistics --- 00:46:42.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:42.081 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:46:42.081 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:46:42.081 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:46:42.081 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:46:42.081 00:46:42.081 --- 10.0.0.2 ping statistics --- 00:46:42.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:42.081 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:46:42.082 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:42.082 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@461 -- # return 0 00:46:42.082 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:46:42.082 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:42.082 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:46:42.082 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:46:42.082 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:42.082 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:46:42.082 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:46:42.082 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:46:42.082 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:46:42.082 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:42.082 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:46:42.082 ************************************ 00:46:42.082 START TEST nvmf_filesystem_no_in_capsule 00:46:42.082 ************************************ 00:46:42.082 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:46:42.082 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:46:42.082 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:46:42.082 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:46:42.082 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:42.082 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:46:42.082 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=72108 00:46:42.082 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:46:42.082 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 72108 00:46:42.082 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 72108 ']' 00:46:42.082 09:57:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:42.082 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:42.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:42.082 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:42.082 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:42.082 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:46:42.082 [2024-12-09 09:57:48.916393] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:46:42.082 [2024-12-09 09:57:48.916509] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:42.082 [2024-12-09 09:57:49.074600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:46:42.341 [2024-12-09 09:57:49.128743] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:42.341 [2024-12-09 09:57:49.128793] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:42.341 [2024-12-09 09:57:49.128800] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:42.341 [2024-12-09 09:57:49.128805] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:42.341 [2024-12-09 09:57:49.128809] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
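Note on the trace above: the "Cannot find device" and "Cannot open network namespace" errors in the teardown phase are expected, since nvmf/common.sh follows each cleanup command with "true" so a clean host does not abort the run. A condensed sketch of the veth/bridge topology the script then assembles, using the interface names and 10.0.0.x/24 addresses from the trace (the second pair, nvmf_init_if2/nvmf_tgt_if2 at 10.0.0.2/10.0.0.4, is wired identically and omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk                                  # isolated namespace for the target
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up                                    # likewise for every other link, inside and out
  ip link add nvmf_br type bridge                                # bridge joins the *_br peer ends
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # the ipts wrapper in the trace tags each rule with -m comment 'SPDK_NVMF:...',
  # apparently so cleanup can identify its own rules later:
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                             # host-to-namespace sanity check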
00:46:42.341 [2024-12-09 09:57:49.129653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:42.341 [2024-12-09 09:57:49.129798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:46:42.341 [2024-12-09 09:57:49.129980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:42.341 [2024-12-09 09:57:49.130000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:46:42.911 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:42.911 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:46:42.911 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:46:42.911 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:46:42.911 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:46:42.911 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:42.911 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:46:42.911 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:46:42.911 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:42.911 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:46:42.911 [2024-12-09 09:57:49.898715] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:42.911 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:42.911 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:46:42.911 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:42.911 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:46:43.171 Malloc1 00:46:43.171 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:43.171 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:46:43.171 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:43.171 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:46:43.171 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:43.171 09:57:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:46:43.171 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:43.171 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:46:43.171 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:43.171 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:46:43.171 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:43.171 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:46:43.171 [2024-12-09 09:57:50.043993] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:46:43.171 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:43.171 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:46:43.171 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:46:43.171 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:46:43.171 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:46:43.171 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:46:43.171 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:46:43.171 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:43.171 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:46:43.171 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:43.171 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:46:43.171 { 00:46:43.171 "aliases": [ 00:46:43.171 "1b3895a7-7555-4855-b096-f14dacff9ca4" 00:46:43.171 ], 00:46:43.171 "assigned_rate_limits": { 00:46:43.171 "r_mbytes_per_sec": 0, 00:46:43.171 "rw_ios_per_sec": 0, 00:46:43.171 "rw_mbytes_per_sec": 0, 00:46:43.171 "w_mbytes_per_sec": 0 00:46:43.171 }, 00:46:43.171 "block_size": 512, 00:46:43.171 "claim_type": "exclusive_write", 00:46:43.171 "claimed": true, 00:46:43.171 "driver_specific": {}, 00:46:43.171 "memory_domains": [ 00:46:43.171 { 00:46:43.171 "dma_device_id": "system", 00:46:43.171 "dma_device_type": 1 00:46:43.171 }, 00:46:43.171 { 00:46:43.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:46:43.171 
"dma_device_type": 2 00:46:43.171 } 00:46:43.171 ], 00:46:43.171 "name": "Malloc1", 00:46:43.171 "num_blocks": 1048576, 00:46:43.171 "product_name": "Malloc disk", 00:46:43.171 "supported_io_types": { 00:46:43.171 "abort": true, 00:46:43.171 "compare": false, 00:46:43.171 "compare_and_write": false, 00:46:43.171 "copy": true, 00:46:43.171 "flush": true, 00:46:43.171 "get_zone_info": false, 00:46:43.171 "nvme_admin": false, 00:46:43.171 "nvme_io": false, 00:46:43.171 "nvme_io_md": false, 00:46:43.171 "nvme_iov_md": false, 00:46:43.171 "read": true, 00:46:43.171 "reset": true, 00:46:43.171 "seek_data": false, 00:46:43.171 "seek_hole": false, 00:46:43.171 "unmap": true, 00:46:43.171 "write": true, 00:46:43.171 "write_zeroes": true, 00:46:43.171 "zcopy": true, 00:46:43.171 "zone_append": false, 00:46:43.171 "zone_management": false 00:46:43.171 }, 00:46:43.171 "uuid": "1b3895a7-7555-4855-b096-f14dacff9ca4", 00:46:43.171 "zoned": false 00:46:43.171 } 00:46:43.171 ]' 00:46:43.171 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:46:43.171 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:46:43.171 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:46:43.171 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:46:43.171 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:46:43.171 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:46:43.171 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:46:43.171 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:46:43.431 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:46:43.431 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:46:43.431 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:46:43.431 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:46:43.431 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:46:45.404 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:46:45.404 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:46:45.405 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:46:45.405 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:46:45.405 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:46:45.405 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:46:45.405 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:46:45.405 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:46:45.405 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:46:45.405 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:46:45.405 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:46:45.405 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:46:45.405 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:46:45.405 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:46:45.405 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:46:45.405 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:46:45.405 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:46:45.664 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:46:45.664 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:46:46.601 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:46:46.601 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:46:46.601 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:46:46.601 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:46.601 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:46:46.601 ************************************ 00:46:46.601 START TEST filesystem_ext4 00:46:46.601 ************************************ 00:46:46.601 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
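Before the ext4 body starts, the setup just traced is worth collapsing into its plain-CLI equivalent. The rpc_cmd calls talk to the nvmf_tgt inside the namespace over /var/tmp/spdk.sock; they are shown here as scripts/rpc.py invocations with the arguments taken from the trace (-c 0 disables in-capsule data for this half of the suite):

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0                 # TCP transport, no in-capsule data
  rpc.py bdev_malloc_create 512 512 -b Malloc1                        # 512 MiB RAM disk, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1     # expose the bdev as a namespace
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420   # --hostnqn/--hostid from the trace omitted
  # size check from the trace: block_size * num_blocks = 512 * 1048576
  #                          = 536870912 bytes, matching /sys/block/nvme0n1
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%         # one partition spanning the disk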
00:46:46.601 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:46:46.601 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:46:46.601 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:46:46.601 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:46:46.601 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:46:46.601 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:46:46.601 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:46:46.601 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:46:46.601 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:46:46.601 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:46:46.601 mke2fs 1.47.0 (5-Feb-2023) 00:46:46.601 Discarding device blocks: 0/522240 done 00:46:46.601 Creating filesystem with 522240 1k blocks and 130560 inodes 00:46:46.601 Filesystem UUID: 4726be38-ad32-4ee8-8100-6eeade0c1ac7 00:46:46.601 Superblock backups stored on blocks: 00:46:46.601 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:46:46.601 00:46:46.601 Allocating group tables: 0/64 done 00:46:46.601 Writing inode tables: 0/64 done 00:46:46.601 Creating journal (8192 blocks): done 00:46:46.601 Writing superblocks and filesystem accounting information: 0/64 done 00:46:46.601 00:46:46.601 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:46:46.601 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:46:53.170 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:46:53.170 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:46:53.170 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:46:53.170 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:46:53.170 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:46:53.170 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:46:53.170 
09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 72108 00:46:53.170 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:46:53.170 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:46:53.170 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:46:53.170 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:46:53.170 ************************************ 00:46:53.170 END TEST filesystem_ext4 00:46:53.170 ************************************ 00:46:53.170 00:46:53.170 real 0m5.572s 00:46:53.170 user 0m0.031s 00:46:53.170 sys 0m0.080s 00:46:53.170 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:53.170 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:46:53.170 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:46:53.170 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:46:53.170 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:53.170 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:46:53.170 ************************************ 00:46:53.170 START TEST filesystem_btrfs 00:46:53.170 ************************************ 00:46:53.170 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:46:53.170 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:46:53.170 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:46:53.170 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:46:53.170 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:46:53.170 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:46:53.170 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:46:53.170 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:46:53.170 09:57:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:46:53.171 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:46:53.171 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:46:53.171 btrfs-progs v6.8.1 00:46:53.171 See https://btrfs.readthedocs.io for more information. 00:46:53.171 00:46:53.171 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:46:53.171 NOTE: several default settings have changed in version 5.15, please make sure 00:46:53.171 this does not affect your deployments: 00:46:53.171 - DUP for metadata (-m dup) 00:46:53.171 - enabled no-holes (-O no-holes) 00:46:53.171 - enabled free-space-tree (-R free-space-tree) 00:46:53.171 00:46:53.171 Label: (null) 00:46:53.171 UUID: 6048a9ab-000b-46cc-9b42-3dcf0720a956 00:46:53.171 Node size: 16384 00:46:53.171 Sector size: 4096 (CPU page size: 4096) 00:46:53.171 Filesystem size: 510.00MiB 00:46:53.171 Block group profiles: 00:46:53.171 Data: single 8.00MiB 00:46:53.171 Metadata: DUP 32.00MiB 00:46:53.171 System: DUP 8.00MiB 00:46:53.171 SSD detected: yes 00:46:53.171 Zoned device: no 00:46:53.171 Features: extref, skinny-metadata, no-holes, free-space-tree 00:46:53.171 Checksum: crc32c 00:46:53.171 Number of devices: 1 00:46:53.171 Devices: 00:46:53.171 ID SIZE PATH 00:46:53.171 1 510.00MiB /dev/nvme0n1p1 00:46:53.171 00:46:53.171 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:46:53.171 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:46:53.171 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:46:53.171 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:46:53.171 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:46:53.171 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:46:53.171 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:46:53.171 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:46:53.171 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 72108 00:46:53.171 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:46:53.171 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:46:53.171 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:46:53.171 
09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:46:53.171 ************************************ 00:46:53.171 END TEST filesystem_btrfs 00:46:53.171 ************************************ 00:46:53.171 00:46:53.171 real 0m0.307s 00:46:53.171 user 0m0.023s 00:46:53.171 sys 0m0.076s 00:46:53.171 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:53.171 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:46:53.171 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:46:53.171 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:46:53.171 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:53.171 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:46:53.171 ************************************ 00:46:53.171 START TEST filesystem_xfs 00:46:53.171 ************************************ 00:46:53.171 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:46:53.171 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:46:53.171 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:46:53.171 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:46:53.171 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:46:53.171 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:46:53.171 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:46:53.171 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:46:53.171 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:46:53.171 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:46:53.171 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:46:53.171 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:46:53.171 = sectsz=512 attr=2, projid32bit=1 00:46:53.171 = crc=1 finobt=1, sparse=1, rmapbt=0 00:46:53.171 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:46:53.171 data 
= bsize=4096 blocks=130560, imaxpct=25 00:46:53.171 = sunit=0 swidth=0 blks 00:46:53.171 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:46:53.171 log =internal log bsize=4096 blocks=16384, version=2 00:46:53.171 = sectsz=512 sunit=0 blks, lazy-count=1 00:46:53.171 realtime =none extsz=4096 blocks=0, rtextents=0 00:46:53.430 Discarding blocks...Done. 00:46:53.430 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:46:53.430 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 72108 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:46:55.973 ************************************ 00:46:55.973 END TEST filesystem_xfs 00:46:55.973 ************************************ 00:46:55.973 00:46:55.973 real 0m3.150s 00:46:55.973 user 0m0.026s 00:46:55.973 sys 0m0.066s 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:46:55.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:46:55.973 09:58:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 72108 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 72108 ']' 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 72108 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72108 00:46:55.973 killing process with pid 72108 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72108' 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 72108 00:46:55.973 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 72108 00:46:56.233 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:46:56.233 00:46:56.233 real 0m14.374s 00:46:56.233 user 0m55.682s 00:46:56.233 sys 0m1.651s 00:46:56.233 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:56.233 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:46:56.233 ************************************ 00:46:56.233 END TEST nvmf_filesystem_no_in_capsule 00:46:56.233 ************************************ 00:46:56.233 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:46:56.233 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:46:56.233 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:56.233 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:46:56.233 ************************************ 00:46:56.233 START TEST nvmf_filesystem_in_capsule 00:46:56.233 ************************************ 00:46:56.233 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:46:56.233 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:46:56.233 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:46:56.233 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:46:56.233 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:56.233 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:46:56.493 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=72481 00:46:56.493 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:46:56.493 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 72481 00:46:56.493 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 72481 ']' 00:46:56.493 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:56.493 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:56.493 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:56.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
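Two things about the boundary here. First, user time (0m55.682s) exceeding real time (0m14.374s) in the summary above is normal for this target: four polled-mode reactors each occupy a core for the duration. Second, the in-capsule run now starting repeats the same flow, its only functional difference being nvmf_create_transport ... -c 4096, which lets initiators carry up to 4096 bytes of write data inside the command capsule rather than in a separate data transfer. Each filesystem_* subtest (ext4, btrfs, xfs) exercised the exported namespace with the same pattern from target/filesystem.sh, roughly:

  mkfs.$fstype $force /dev/nvme0n1p1       # force flag is -F for ext4, -f for btrfs/xfs
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync            # push a real write through NVMe/TCP
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 "$nvmfpid"                       # target process must have survived
  lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition still visible after umount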
00:46:56.493 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:56.493 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:46:56.493 [2024-12-09 09:58:03.341815] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:46:56.494 [2024-12-09 09:58:03.341989] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:56.494 [2024-12-09 09:58:03.495260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:46:56.753 [2024-12-09 09:58:03.555088] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:56.753 [2024-12-09 09:58:03.555145] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:56.753 [2024-12-09 09:58:03.555154] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:56.753 [2024-12-09 09:58:03.555159] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:56.753 [2024-12-09 09:58:03.555164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:56.753 [2024-12-09 09:58:03.556109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:56.753 [2024-12-09 09:58:03.556201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:46:56.753 [2024-12-09 09:58:03.556308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:56.753 [2024-12-09 09:58:03.556313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:46:57.321 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:57.321 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:46:57.321 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:46:57.321 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:46:57.321 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:46:57.321 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:57.321 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:46:57.321 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:46:57.321 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:57.321 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:46:57.321 [2024-12-09 09:58:04.336654] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:57.321 09:58:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:57.321 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:46:57.321 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:57.321 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:46:57.580 Malloc1 00:46:57.580 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:57.580 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:46:57.580 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:57.580 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:46:57.580 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:57.580 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:46:57.580 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:57.580 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:46:57.580 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:57.580 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:46:57.580 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:57.580 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:46:57.580 [2024-12-09 09:58:04.498538] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:46:57.580 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:57.580 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:46:57.580 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:46:57.580 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:46:57.580 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:46:57.580 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:46:57.581 09:58:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:46:57.581 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:57.581 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:46:57.581 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:57.581 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:46:57.581 { 00:46:57.581 "aliases": [ 00:46:57.581 "bea4106a-1606-4492-957a-0e5716eb6de6" 00:46:57.581 ], 00:46:57.581 "assigned_rate_limits": { 00:46:57.581 "r_mbytes_per_sec": 0, 00:46:57.581 "rw_ios_per_sec": 0, 00:46:57.581 "rw_mbytes_per_sec": 0, 00:46:57.581 "w_mbytes_per_sec": 0 00:46:57.581 }, 00:46:57.581 "block_size": 512, 00:46:57.581 "claim_type": "exclusive_write", 00:46:57.581 "claimed": true, 00:46:57.581 "driver_specific": {}, 00:46:57.581 "memory_domains": [ 00:46:57.581 { 00:46:57.581 "dma_device_id": "system", 00:46:57.581 "dma_device_type": 1 00:46:57.581 }, 00:46:57.581 { 00:46:57.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:46:57.581 "dma_device_type": 2 00:46:57.581 } 00:46:57.581 ], 00:46:57.581 "name": "Malloc1", 00:46:57.581 "num_blocks": 1048576, 00:46:57.581 "product_name": "Malloc disk", 00:46:57.581 "supported_io_types": { 00:46:57.581 "abort": true, 00:46:57.581 "compare": false, 00:46:57.581 "compare_and_write": false, 00:46:57.581 "copy": true, 00:46:57.581 "flush": true, 00:46:57.581 "get_zone_info": false, 00:46:57.581 "nvme_admin": false, 00:46:57.581 "nvme_io": false, 00:46:57.581 "nvme_io_md": false, 00:46:57.581 "nvme_iov_md": false, 00:46:57.581 "read": true, 00:46:57.581 "reset": true, 00:46:57.581 "seek_data": false, 00:46:57.581 "seek_hole": false, 00:46:57.581 "unmap": true, 00:46:57.581 "write": true, 00:46:57.581 "write_zeroes": true, 00:46:57.581 "zcopy": true, 00:46:57.581 "zone_append": false, 00:46:57.581 "zone_management": false 00:46:57.581 }, 00:46:57.581 "uuid": "bea4106a-1606-4492-957a-0e5716eb6de6", 00:46:57.581 "zoned": false 00:46:57.581 } 00:46:57.581 ]' 00:46:57.581 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:46:57.581 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:46:57.581 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:46:57.581 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:46:57.581 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:46:57.581 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:46:57.581 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:46:57.581 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:46:57.840 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:46:57.840 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:46:57.840 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:46:57.840 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:46:57.840 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:46:59.815 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:46:59.816 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:46:59.816 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:46:59.816 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:46:59.816 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:46:59.816 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:46:59.816 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:46:59.816 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:46:59.816 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:46:59.816 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:46:59.816 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:46:59.816 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:46:59.816 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:46:59.816 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:46:59.816 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:46:59.816 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:46:59.816 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:47:00.075 09:58:06 
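The trace above brings up the complete NVMe/TCP data path for this test: a 512 MiB malloc bdev with 512-byte blocks, a subsystem with an attached namespace and a TCP listener on 10.0.0.3:4420, a kernel-initiator connect, and a single GPT partition on the resulting /dev/nvme0n1. Condensed into plain commands, the sequence looks roughly like this (a sketch only: rpc_cmd in the test is assumed to wrap SPDK's scripts/rpc.py, and a tcp transport is assumed to have been created earlier in the run):

# Target side: carve out a RAM-backed bdev and export it over NVMe/TCP.
rpc.py bdev_malloc_create 512 512 -b Malloc1                  # 512 MiB, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
rpc.py bdev_get_bdevs -b Malloc1 | jq '.[0].block_size * .[0].num_blocks'  # 536870912 bytes

# Host side: connect the kernel initiator, then lay down one full-size partition.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 \
    --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%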
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:47:00.075 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:47:01.014 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:47:01.014 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:47:01.014 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:47:01.014 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:01.014 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:47:01.014 ************************************ 00:47:01.014 START TEST filesystem_in_capsule_ext4 00:47:01.014 ************************************ 00:47:01.014 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:47:01.014 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:47:01.014 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:47:01.014 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:47:01.014 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:47:01.014 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:47:01.014 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:47:01.014 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:47:01.014 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:47:01.014 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:47:01.014 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:47:01.014 mke2fs 1.47.0 (5-Feb-2023) 00:47:01.014 Discarding device blocks: 0/522240 done 00:47:01.014 Creating filesystem with 522240 1k blocks and 130560 inodes 00:47:01.014 Filesystem UUID: bd0062ef-4eee-4d2c-a1c2-93e4e6549391 00:47:01.014 Superblock backups stored on blocks: 00:47:01.015 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:47:01.015 00:47:01.015 Allocating group tables: 0/64 done 00:47:01.015 Writing inode tables: 
0/64 done 00:47:01.015 Creating journal (8192 blocks): done 00:47:01.274 Writing superblocks and filesystem accounting information: 0/64 done 00:47:01.274 00:47:01.274 09:58:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:47:01.274 09:58:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:47:06.553 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:47:06.553 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:47:06.553 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:47:06.553 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:47:06.553 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:47:06.553 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:47:06.553 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 72481 00:47:06.553 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:47:06.553 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:47:06.553 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:47:06.553 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:47:06.553 00:47:06.553 real 0m5.628s 00:47:06.553 user 0m0.035s 00:47:06.553 sys 0m0.083s 00:47:06.553 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:06.553 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:47:06.553 ************************************ 00:47:06.553 END TEST filesystem_in_capsule_ext4 00:47:06.553 ************************************ 00:47:06.814 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:47:06.814 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:47:06.814 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:06.814 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:47:06.814 
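Each filesystem pass in this group (ext4 above, btrfs and xfs below) runs the same small smoke test: format the partition, mount it, create and delete a file over the NVMe/TCP connection, unmount, and confirm the target process survived. A sketch of the ext4 pass, with paths and the pid taken from the trace (error handling omitted):

mkfs.ext4 -F /dev/nvme0n1p1
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa                     # a write that round-trips through the TCP target
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 72481                             # nvmf_tgt must still be running
lsblk -l -o NAME | grep -q -w nvme0n1p1   # device and partition still visible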
************************************ 00:47:06.814 START TEST filesystem_in_capsule_btrfs 00:47:06.814 ************************************ 00:47:06.814 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:47:06.814 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:47:06.814 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:47:06.814 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:47:06.814 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:47:06.814 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:47:06.814 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:47:06.814 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:47:06.814 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:47:06.814 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:47:06.814 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:47:06.814 btrfs-progs v6.8.1 00:47:06.814 See https://btrfs.readthedocs.io for more information. 00:47:06.814 00:47:06.814 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:47:06.814 NOTE: several default settings have changed in version 5.15, please make sure 00:47:06.814 this does not affect your deployments: 00:47:06.814 - DUP for metadata (-m dup) 00:47:06.814 - enabled no-holes (-O no-holes) 00:47:06.814 - enabled free-space-tree (-R free-space-tree) 00:47:06.814 00:47:06.814 Label: (null) 00:47:06.814 UUID: e4dbe158-9fdd-4c5b-840e-13f35b5685e8 00:47:06.814 Node size: 16384 00:47:06.814 Sector size: 4096 (CPU page size: 4096) 00:47:06.814 Filesystem size: 510.00MiB 00:47:06.814 Block group profiles: 00:47:06.814 Data: single 8.00MiB 00:47:06.814 Metadata: DUP 32.00MiB 00:47:06.814 System: DUP 8.00MiB 00:47:06.814 SSD detected: yes 00:47:06.814 Zoned device: no 00:47:06.814 Features: extref, skinny-metadata, no-holes, free-space-tree 00:47:06.814 Checksum: crc32c 00:47:06.814 Number of devices: 1 00:47:06.814 Devices: 00:47:06.814 ID SIZE PATH 00:47:06.814 1 510.00MiB /dev/nvme0n1p1 00:47:06.814 00:47:06.814 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:47:06.814 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:47:07.075 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:47:07.075 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:47:07.075 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:47:07.075 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:47:07.075 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:47:07.075 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:47:07.075 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 72481 00:47:07.075 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:47:07.075 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:47:07.075 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:47:07.075 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:47:07.075 ************************************ 00:47:07.075 END TEST filesystem_in_capsule_btrfs 00:47:07.075 ************************************ 00:47:07.075 00:47:07.075 real 0m0.275s 00:47:07.075 user 0m0.032s 00:47:07.075 sys 0m0.081s 00:47:07.075 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:47:07.075 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:47:07.075 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:47:07.075 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:47:07.075 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:07.075 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:47:07.075 ************************************ 00:47:07.075 START TEST filesystem_in_capsule_xfs 00:47:07.075 ************************************ 00:47:07.075 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:47:07.075 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:47:07.075 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:47:07.075 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:47:07.075 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:47:07.075 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:47:07.075 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:47:07.075 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:47:07.075 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:47:07.075 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:47:07.075 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:47:07.075 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:47:07.075 = sectsz=512 attr=2, projid32bit=1 00:47:07.075 = crc=1 finobt=1, sparse=1, rmapbt=0 00:47:07.075 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:47:07.075 data = bsize=4096 blocks=130560, imaxpct=25 00:47:07.075 = sunit=0 swidth=0 blks 00:47:07.075 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:47:07.075 log =internal log bsize=4096 blocks=16384, version=2 00:47:07.075 = sectsz=512 sunit=0 blks, lazy-count=1 00:47:07.075 realtime =none extsz=4096 blocks=0, rtextents=0 00:47:08.097 Discarding blocks...Done. 
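The make_filesystem traces for the three passes differ only in the force flag: mkfs.ext4 spells it -F, while mkfs.btrfs and mkfs.xfs use -f. A sketch of the dispatch, assuming this shape for the helper (the trace also declares a retry counter i, whose loop is omitted here):

make_filesystem() {
    local fstype=$1 dev_name=$2 i=0 force
    if [ "$fstype" = ext4 ]; then
        force=-F     # e2fsprogs convention
    else
        force=-f     # btrfs-progs and xfsprogs convention
    fi
    mkfs."$fstype" "$force" "$dev_name"
}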
00:47:08.097 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:47:08.097 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 72481 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:47:10.006 00:47:10.006 real 0m2.625s 00:47:10.006 user 0m0.025s 00:47:10.006 sys 0m0.088s 00:47:10.006 ************************************ 00:47:10.006 END TEST filesystem_in_capsule_xfs 00:47:10.006 ************************************ 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:47:10.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 72481 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 72481 ']' 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 72481 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72481 00:47:10.006 killing process with pid 72481 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72481' 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 72481 00:47:10.006 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 72481 00:47:10.267 ************************************ 00:47:10.267 END TEST nvmf_filesystem_in_capsule 00:47:10.267 ************************************ 00:47:10.267 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 
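Teardown mirrors the setup: drop the partition, disconnect the kernel host, delete the subsystem over RPC, and stop the target. Condensed from the trace (rpc.py again standing in for rpc_cmd):

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1   # serialize against concurrent probes
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill 72481 && wait 72481    # wait works here because nvmf_tgt is a child of the test shell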
00:47:10.267 00:47:10.267 real 0m13.901s 00:47:10.267 user 0m53.825s 00:47:10.267 sys 0m1.776s 00:47:10.267 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:10.267 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:47:10.267 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:47:10.267 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:47:10.267 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:47:10.267 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:47:10.267 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:47:10.267 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:47:10.267 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:47:10.267 rmmod nvme_tcp 00:47:10.267 rmmod nvme_fabrics 00:47:10.525 rmmod nvme_keyring 00:47:10.525 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:47:10.525 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:47:10.525 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:47:10.525 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:47:10.525 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:47:10.525 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:47:10.525 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:47:10.525 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:47:10.525 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:47:10.525 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:47:10.525 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:47:10.525 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:10.525 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:47:10.525 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:47:10.525 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:47:10.525 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:47:10.525 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:47:10.525 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:47:10.525 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:47:10.525 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:47:10.525 09:58:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:47:10.525 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:47:10.525 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:47:10.525 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:47:10.784 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:10.784 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:10.784 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:47:10.784 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:10.784 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:10.784 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:10.784 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@300 -- # return 0 00:47:10.784 00:47:10.784 real 0m29.869s 00:47:10.784 user 1m50.027s 00:47:10.784 sys 0m4.131s 00:47:10.784 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:10.784 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:47:10.784 ************************************ 00:47:10.784 END TEST nvmf_filesystem 00:47:10.784 ************************************ 00:47:10.784 09:58:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:47:10.784 09:58:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:47:10.784 09:58:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:10.784 09:58:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:47:10.784 ************************************ 00:47:10.784 START TEST nvmf_target_discovery 00:47:10.784 ************************************ 00:47:10.784 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:47:11.044 * Looking for test storage... 
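nvmftestfini's network cleanup, condensed: unload the kernel initiator modules, strip only the SPDK-tagged firewall rules, then dismantle the veth/bridge/namespace topology. A simplified sketch (interface and namespace names from the trace; the real remove_spdk_ns does more bookkeeping, and deleting the namespace here stands in for removing nvmf_tgt_if/nvmf_tgt_if2 individually):

modprobe -r nvme-tcp nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK-commented rules
for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$l" nomaster
    ip link set "$l" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns delete nvmf_tgt_ns_spdk    # veths inside the namespace go with it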
00:47:11.044 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:47:11.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:11.044 --rc genhtml_branch_coverage=1 00:47:11.044 --rc genhtml_function_coverage=1 00:47:11.044 --rc genhtml_legend=1 00:47:11.044 --rc geninfo_all_blocks=1 00:47:11.044 --rc geninfo_unexecuted_blocks=1 00:47:11.044 00:47:11.044 ' 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:47:11.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:11.044 --rc genhtml_branch_coverage=1 00:47:11.044 --rc genhtml_function_coverage=1 00:47:11.044 --rc genhtml_legend=1 00:47:11.044 --rc geninfo_all_blocks=1 00:47:11.044 --rc geninfo_unexecuted_blocks=1 00:47:11.044 00:47:11.044 ' 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:47:11.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:11.044 --rc genhtml_branch_coverage=1 00:47:11.044 --rc genhtml_function_coverage=1 00:47:11.044 --rc genhtml_legend=1 00:47:11.044 --rc geninfo_all_blocks=1 00:47:11.044 --rc geninfo_unexecuted_blocks=1 00:47:11.044 00:47:11.044 ' 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:47:11.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:11.044 --rc genhtml_branch_coverage=1 00:47:11.044 --rc genhtml_function_coverage=1 00:47:11.044 --rc genhtml_legend=1 00:47:11.044 --rc geninfo_all_blocks=1 00:47:11.044 --rc geninfo_unexecuted_blocks=1 00:47:11.044 00:47:11.044 ' 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:11.044 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:47:11.045 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:47:11.045 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:11.045 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:11.045 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:47:11.045 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:11.045 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:11.045 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:47:11.045 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:11.045 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:11.045 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:11.045 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:11.045 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:11.045 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:11.045 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:47:11.045 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:11.045 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:47:11.045 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:11.045 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 
00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:47:11.045 Cannot find device "nvmf_init_br" 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:47:11.045 Cannot find device "nvmf_init_br2" 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:47:11.045 Cannot find device "nvmf_tgt_br" 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # true 00:47:11.045 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:47:11.305 Cannot find device "nvmf_tgt_br2" 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # true 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:47:11.305 Cannot find device "nvmf_init_br" 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # true 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:47:11.305 Cannot find device "nvmf_init_br2" 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # true 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:47:11.305 Cannot find device "nvmf_tgt_br" 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # true 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:47:11.305 Cannot find device "nvmf_tgt_br2" 00:47:11.305 09:58:18 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # true 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:47:11.305 Cannot find device "nvmf_br" 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # true 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:47:11.305 Cannot find device "nvmf_init_if" 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # true 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:47:11.305 Cannot find device "nvmf_init_if2" 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # true 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:11.305 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # true 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:11.305 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # true 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:47:11.305 09:58:18 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:47:11.305 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:47:11.565 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:47:11.565 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:47:11.565 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:47:11.565 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:47:11.565 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:47:11.565 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:47:11.566 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:47:11.566 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:47:11.566 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:47:11.566 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:47:11.566 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:47:11.566 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:47:11.566 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:47:11.566 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:47:11.566 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.133 ms 00:47:11.566 00:47:11.566 --- 10.0.0.3 ping statistics --- 00:47:11.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:11.566 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:47:11.566 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:47:11.566 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:47:11.566 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.105 ms 00:47:11.566 00:47:11.566 --- 10.0.0.4 ping statistics --- 00:47:11.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:11.566 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:47:11.566 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:47:11.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:47:11.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:47:11.566 00:47:11.566 --- 10.0.0.1 ping statistics --- 00:47:11.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:11.566 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:47:11.566 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:47:11.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:47:11.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:47:11.566 00:47:11.566 --- 10.0.0.2 ping statistics --- 00:47:11.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:11.566 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:47:11.566 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:11.566 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@461 -- # return 0 00:47:11.566 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:47:11.566 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:11.566 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:47:11.566 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:47:11.566 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:11.566 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:47:11.566 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:47:11.566 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:47:11.566 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:47:11.566 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:11.566 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:47:11.566 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:47:11.566 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=73064 
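Condensed, the namespace/bridge plumbing that the four pings above just verified amounts to the iproute2 sequence below. This is a sketch covering one of the two veth pairs per side (the *_if2 pair and most link-up steps are elided), not a replacement for nvmf_veth_init:

  ip netns add nvmf_tgt_ns_spdk
  # one veth pair per endpoint; the *_br ends stay in the root namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if  # target side
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br   # bridge the root-namespace ends together
  ip link set nvmf_tgt_br master nvmf_br
  ping -c 1 10.0.0.3                        # root ns -> target ns, as logged above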
00:47:11.566 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 73064 00:47:11.566 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 73064 ']' 00:47:11.566 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:11.566 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:11.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:11.566 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:11.566 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:11.566 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:47:11.566 [2024-12-09 09:58:18.568241] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:47:11.566 [2024-12-09 09:58:18.568369] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:11.823 [2024-12-09 09:58:18.726404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:47:11.823 [2024-12-09 09:58:18.782342] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:11.823 [2024-12-09 09:58:18.782393] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:11.823 [2024-12-09 09:58:18.782400] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:11.823 [2024-12-09 09:58:18.782405] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:11.823 [2024-12-09 09:58:18.782409] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
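The launch-and-wait pair traced above (nvmf_tgt started inside the namespace, then waitforlisten polling /var/tmp/spdk.sock) reduces to roughly the following; the real waitforlisten also caps itself at max_retries=100, which this sketch omits:

  ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll until the app answers on the default RPC socket
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done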
00:47:11.823 [2024-12-09 09:58:18.783324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:11.823 [2024-12-09 09:58:18.783418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:47:11.823 [2024-12-09 09:58:18.783530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:47:11.823 [2024-12-09 09:58:18.783559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:47:12.769 [2024-12-09 09:58:19.559336] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:47:12.769 Null1 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:47:12.769 09:58:19 
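rpc_cmd in these traces is a wrapper around scripts/rpc.py pointed at the socket above, so the transport step a few entries back is equivalent to the lines below; the nvmf_get_transports call is added here only as a sanity check and is not part of discovery.sh:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
  "$rpc" -s /var/tmp/spdk.sock nvmf_get_transports   # confirm the tcp transport registered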
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:47:12.769 [2024-12-09 09:58:19.623368] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:47:12.769 Null2 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:12.769 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:47:12.770 Null3 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:47:12.770 Null4 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:12.770 09:58:19 
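The four nearly identical blocks above (Null1 through Null4; cnode4's listener completes in the next entry) are a single loop in discovery.sh, condensed:

  for i in $(seq 1 4); do
    rpc_cmd bdev_null_create "Null$i" 102400 512                # size and block size per the logged calls
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
      -a -s "SPDK0000000000000$i"                               # allow any host; fixed serial
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
      -t tcp -a 10.0.0.3 -s 4420
  done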
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.3 -s 4430 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:12.770 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -a 10.0.0.3 -s 4420 00:47:13.030 00:47:13.030 Discovery Log Number of Records 6, Generation counter 6 00:47:13.030 =====Discovery Log Entry 0====== 00:47:13.030 trtype: tcp 00:47:13.030 adrfam: ipv4 00:47:13.030 subtype: current discovery subsystem 00:47:13.030 treq: not required 00:47:13.030 portid: 0 00:47:13.030 trsvcid: 4420 00:47:13.030 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:47:13.030 traddr: 10.0.0.3 00:47:13.030 eflags: explicit discovery connections, duplicate discovery information 00:47:13.030 sectype: none 00:47:13.030 =====Discovery Log Entry 1====== 00:47:13.030 trtype: tcp 00:47:13.030 adrfam: ipv4 00:47:13.030 subtype: nvme subsystem 00:47:13.030 treq: not required 00:47:13.030 portid: 0 00:47:13.030 trsvcid: 4420 00:47:13.030 subnqn: nqn.2016-06.io.spdk:cnode1 00:47:13.030 traddr: 10.0.0.3 00:47:13.030 eflags: none 00:47:13.030 sectype: none 00:47:13.030 =====Discovery Log Entry 2====== 00:47:13.030 trtype: tcp 00:47:13.030 adrfam: ipv4 00:47:13.030 subtype: nvme subsystem 00:47:13.030 treq: not required 00:47:13.030 portid: 0 00:47:13.030 trsvcid: 4420 00:47:13.030 subnqn: nqn.2016-06.io.spdk:cnode2 00:47:13.030 traddr: 10.0.0.3 00:47:13.030 eflags: none 00:47:13.030 sectype: none 00:47:13.030 =====Discovery Log Entry 3====== 00:47:13.030 trtype: tcp 00:47:13.030 adrfam: ipv4 00:47:13.030 subtype: nvme subsystem 00:47:13.030 treq: not required 00:47:13.030 portid: 0 00:47:13.030 trsvcid: 4420 00:47:13.030 subnqn: nqn.2016-06.io.spdk:cnode3 00:47:13.030 traddr: 10.0.0.3 00:47:13.030 eflags: none 00:47:13.030 sectype: none 00:47:13.030 =====Discovery Log Entry 4====== 00:47:13.030 trtype: tcp 00:47:13.030 adrfam: ipv4 00:47:13.030 subtype: nvme subsystem 
00:47:13.030 treq: not required 00:47:13.030 portid: 0 00:47:13.030 trsvcid: 4420 00:47:13.030 subnqn: nqn.2016-06.io.spdk:cnode4 00:47:13.030 traddr: 10.0.0.3 00:47:13.030 eflags: none 00:47:13.030 sectype: none 00:47:13.030 =====Discovery Log Entry 5====== 00:47:13.030 trtype: tcp 00:47:13.030 adrfam: ipv4 00:47:13.030 subtype: discovery subsystem referral 00:47:13.030 treq: not required 00:47:13.030 portid: 0 00:47:13.030 trsvcid: 4430 00:47:13.030 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:47:13.030 traddr: 10.0.0.3 00:47:13.030 eflags: none 00:47:13.030 sectype: none 00:47:13.030 Perform nvmf subsystem discovery via RPC 00:47:13.030 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:47:13.030 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:47:13.030 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:13.030 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:47:13.030 [ 00:47:13.030 { 00:47:13.030 "allow_any_host": true, 00:47:13.030 "hosts": [], 00:47:13.030 "listen_addresses": [ 00:47:13.030 { 00:47:13.030 "adrfam": "IPv4", 00:47:13.030 "traddr": "10.0.0.3", 00:47:13.030 "trsvcid": "4420", 00:47:13.030 "trtype": "TCP" 00:47:13.030 } 00:47:13.030 ], 00:47:13.030 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:47:13.030 "subtype": "Discovery" 00:47:13.030 }, 00:47:13.030 { 00:47:13.030 "allow_any_host": true, 00:47:13.030 "hosts": [], 00:47:13.030 "listen_addresses": [ 00:47:13.030 { 00:47:13.030 "adrfam": "IPv4", 00:47:13.030 "traddr": "10.0.0.3", 00:47:13.030 "trsvcid": "4420", 00:47:13.030 "trtype": "TCP" 00:47:13.030 } 00:47:13.030 ], 00:47:13.030 "max_cntlid": 65519, 00:47:13.030 "max_namespaces": 32, 00:47:13.030 "min_cntlid": 1, 00:47:13.030 "model_number": "SPDK bdev Controller", 00:47:13.030 "namespaces": [ 00:47:13.030 { 00:47:13.030 "bdev_name": "Null1", 00:47:13.030 "name": "Null1", 00:47:13.030 "nguid": "1976CEB122F14190A213DB436EC7B9EC", 00:47:13.030 "nsid": 1, 00:47:13.030 "uuid": "1976ceb1-22f1-4190-a213-db436ec7b9ec" 00:47:13.030 } 00:47:13.030 ], 00:47:13.030 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:47:13.030 "serial_number": "SPDK00000000000001", 00:47:13.030 "subtype": "NVMe" 00:47:13.030 }, 00:47:13.030 { 00:47:13.030 "allow_any_host": true, 00:47:13.030 "hosts": [], 00:47:13.030 "listen_addresses": [ 00:47:13.030 { 00:47:13.030 "adrfam": "IPv4", 00:47:13.030 "traddr": "10.0.0.3", 00:47:13.030 "trsvcid": "4420", 00:47:13.030 "trtype": "TCP" 00:47:13.030 } 00:47:13.030 ], 00:47:13.030 "max_cntlid": 65519, 00:47:13.030 "max_namespaces": 32, 00:47:13.030 "min_cntlid": 1, 00:47:13.030 "model_number": "SPDK bdev Controller", 00:47:13.030 "namespaces": [ 00:47:13.030 { 00:47:13.030 "bdev_name": "Null2", 00:47:13.030 "name": "Null2", 00:47:13.030 "nguid": "8B52F26D5D0B4510A88C4B9779111AEC", 00:47:13.030 "nsid": 1, 00:47:13.030 "uuid": "8b52f26d-5d0b-4510-a88c-4b9779111aec" 00:47:13.030 } 00:47:13.030 ], 00:47:13.030 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:47:13.030 "serial_number": "SPDK00000000000002", 00:47:13.030 "subtype": "NVMe" 00:47:13.030 }, 00:47:13.030 { 00:47:13.030 "allow_any_host": true, 00:47:13.030 "hosts": [], 00:47:13.030 "listen_addresses": [ 00:47:13.030 { 00:47:13.030 "adrfam": "IPv4", 00:47:13.030 "traddr": "10.0.0.3", 00:47:13.030 "trsvcid": "4420", 00:47:13.030 
"trtype": "TCP" 00:47:13.030 } 00:47:13.030 ], 00:47:13.030 "max_cntlid": 65519, 00:47:13.030 "max_namespaces": 32, 00:47:13.030 "min_cntlid": 1, 00:47:13.031 "model_number": "SPDK bdev Controller", 00:47:13.031 "namespaces": [ 00:47:13.031 { 00:47:13.031 "bdev_name": "Null3", 00:47:13.031 "name": "Null3", 00:47:13.031 "nguid": "9C0B4D20D6014D18B7A336F087A4EE20", 00:47:13.031 "nsid": 1, 00:47:13.031 "uuid": "9c0b4d20-d601-4d18-b7a3-36f087a4ee20" 00:47:13.031 } 00:47:13.031 ], 00:47:13.031 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:47:13.031 "serial_number": "SPDK00000000000003", 00:47:13.031 "subtype": "NVMe" 00:47:13.031 }, 00:47:13.031 { 00:47:13.031 "allow_any_host": true, 00:47:13.031 "hosts": [], 00:47:13.031 "listen_addresses": [ 00:47:13.031 { 00:47:13.031 "adrfam": "IPv4", 00:47:13.031 "traddr": "10.0.0.3", 00:47:13.031 "trsvcid": "4420", 00:47:13.031 "trtype": "TCP" 00:47:13.031 } 00:47:13.031 ], 00:47:13.031 "max_cntlid": 65519, 00:47:13.031 "max_namespaces": 32, 00:47:13.031 "min_cntlid": 1, 00:47:13.031 "model_number": "SPDK bdev Controller", 00:47:13.031 "namespaces": [ 00:47:13.031 { 00:47:13.031 "bdev_name": "Null4", 00:47:13.031 "name": "Null4", 00:47:13.031 "nguid": "1B6FDDC8371B47C7A40C74361ABCCCAC", 00:47:13.031 "nsid": 1, 00:47:13.031 "uuid": "1b6fddc8-371b-47c7-a40c-74361abcccac" 00:47:13.031 } 00:47:13.031 ], 00:47:13.031 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:47:13.031 "serial_number": "SPDK00000000000004", 00:47:13.031 "subtype": "NVMe" 00:47:13.031 } 00:47:13.031 ] 00:47:13.031 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:13.031 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:47:13.031 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:47:13.031 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:47:13.031 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:13.031 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:47:13.031 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:13.031 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:47:13.031 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:13.031 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:47:13.031 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:13.031 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:47:13.031 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:47:13.031 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:13.031 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:47:13.031 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:13.031 09:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:47:13.031 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:13.031 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:47:13.031 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:13.031 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:47:13.031 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:47:13.031 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:13.031 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:47:13.031 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:13.031 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:47:13.031 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:13.031 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:47:13.031 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:13.031 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:47:13.031 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:47:13.031 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:13.031 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:47:13.031 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:13.031 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:47:13.031 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:13.031 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:47:13.031 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:13.031 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.3 -s 4430 00:47:13.031 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:13.031 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:47:13.031 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:13.031 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:47:13.031 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:47:13.031 09:58:20 
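The bdev_get_bdevs | jq pass just traced feeds the suite's leak check: after the symmetric teardown (each subsystem deleted before its backing null bdev, four times over), the bdev list must come back empty. A sketch of that check; the fail-loudly branch is an assumption, the real script only tests -n on the result:

  check_bdevs=$(rpc_cmd bdev_get_bdevs | jq -r '.[].name')
  if [ -n "$check_bdevs" ]; then
    echo "bdevs leaked after teardown: $check_bdevs" >&2
    exit 1
  fi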
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:13.031 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:47:13.290 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:13.290 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:47:13.290 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:47:13.290 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:47:13.290 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:47:13.290 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:47:13.290 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:47:13.290 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:47:13.290 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:47:13.290 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:47:13.290 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:47:13.290 rmmod nvme_tcp 00:47:13.290 rmmod nvme_fabrics 00:47:13.290 rmmod nvme_keyring 00:47:13.290 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:47:13.290 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:47:13.290 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:47:13.290 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 73064 ']' 00:47:13.290 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 73064 00:47:13.290 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 73064 ']' 00:47:13.290 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 73064 00:47:13.290 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:47:13.290 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:13.290 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73064 00:47:13.290 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:13.290 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:13.290 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73064' 00:47:13.290 killing process with pid 73064 00:47:13.290 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 73064 00:47:13.290 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 73064 00:47:13.548 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:47:13.548 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:47:13.548 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:47:13.548 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:47:13.548 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:47:13.548 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:47:13.549 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:47:13.549 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:13.549 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:47:13.549 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:47:13.549 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:47:13.549 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:47:13.549 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:47:13.549 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:47:13.549 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:47:13.549 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:47:13.549 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:47:13.549 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:47:13.549 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:47:13.549 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:47:13.808 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:13.808 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:13.808 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:47:13.808 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:13.808 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:13.808 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:13.808 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@300 -- # return 0 00:47:13.808 00:47:13.808 real 0m2.957s 00:47:13.808 user 0m7.031s 00:47:13.808 sys 0m0.859s 00:47:13.808 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:47:13.808 ************************************ 00:47:13.808 END TEST nvmf_target_discovery 00:47:13.808 ************************************ 00:47:13.808 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:47:13.808 09:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:47:13.808 09:58:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:47:13.808 09:58:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:13.808 09:58:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:47:13.808 ************************************ 00:47:13.808 START TEST nvmf_referrals 00:47:13.808 ************************************ 00:47:13.808 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:47:14.068 * Looking for test storage... 00:47:14.068 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:47:14.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:14.068 --rc genhtml_branch_coverage=1 00:47:14.068 --rc genhtml_function_coverage=1 00:47:14.068 --rc genhtml_legend=1 00:47:14.068 --rc geninfo_all_blocks=1 00:47:14.068 --rc geninfo_unexecuted_blocks=1 00:47:14.068 00:47:14.068 ' 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:47:14.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:14.068 --rc genhtml_branch_coverage=1 00:47:14.068 --rc genhtml_function_coverage=1 00:47:14.068 --rc genhtml_legend=1 00:47:14.068 --rc geninfo_all_blocks=1 00:47:14.068 --rc geninfo_unexecuted_blocks=1 00:47:14.068 00:47:14.068 ' 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:47:14.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:14.068 --rc genhtml_branch_coverage=1 00:47:14.068 --rc genhtml_function_coverage=1 00:47:14.068 --rc genhtml_legend=1 00:47:14.068 --rc geninfo_all_blocks=1 00:47:14.068 --rc geninfo_unexecuted_blocks=1 00:47:14.068 00:47:14.068 ' 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:47:14.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:14.068 --rc genhtml_branch_coverage=1 00:47:14.068 --rc genhtml_function_coverage=1 00:47:14.068 --rc genhtml_legend=1 00:47:14.068 --rc geninfo_all_blocks=1 00:47:14.068 --rc geninfo_unexecuted_blocks=1 00:47:14.068 00:47:14.068 ' 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:47:14.068 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 
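The scripts/common.sh trace above is a field-wise version comparison: lcov's version and the threshold are split on ., - and :, then compared numerically position by position, with missing fields treated as 0. A standalone sketch of the same idea, minus the real cmp_versions' per-field decimal validation:

  version_lt() {                     # returns 0 when $1 < $2
    local -a a b
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} )) i x y
    for (( i = 0; i < n; i++ )); do
      x=${a[i]:-0} y=${b[i]:-0}
      (( x < y )) && return 0
      (( x > y )) && return 1
    done
    return 1                         # equal is not less-than
  }
  version_lt 1.15 2 && echo "1.15 < 2"   # same outcome as the lt 1.15 2 trace above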
00:47:14.068 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:14.068 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:14.068 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:14.068 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:14.068 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:14.068 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:14.068 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:14.068 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:14.068 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:14.068 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:14.068 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:47:14.068 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:47:14.068 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:14.068 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:14.068 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:47:14.068 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:14.068 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:14.068 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:47:14.068 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:14.068 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:14.068 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:14.068 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:14.068 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:14.068 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:14.068 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:47:14.068 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:14.069 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:47:14.069 09:58:21 
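The "integer expression expected" complaint above comes from an empty variable reaching a numeric test ('[' '' -eq 1 ']' at common.sh line 33), which the harness tolerates. A defensive variant would default the operand first; var here is a stand-in for whichever variable that line actually tests:

  # hypothetical guard; the shipped common.sh lets the empty comparison fail through
  if [ "${var:-0}" -eq 1 ]; then
    : # branch taken only when var is really 1
  fi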
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@460 -- # nvmf_veth_init 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:47:14.069 Cannot find device "nvmf_init_br" 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:47:14.069 Cannot find device "nvmf_init_br2" 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:47:14.069 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:47:14.069 Cannot find device "nvmf_tgt_br" 00:47:14.328 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # true 00:47:14.328 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:47:14.328 Cannot find device "nvmf_tgt_br2" 00:47:14.328 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # true 00:47:14.328 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:47:14.328 Cannot find device "nvmf_init_br" 00:47:14.328 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # true 00:47:14.328 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:47:14.328 Cannot find device "nvmf_init_br2" 00:47:14.328 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # true 00:47:14.328 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:47:14.328 Cannot find device "nvmf_tgt_br" 00:47:14.328 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # true 00:47:14.328 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:47:14.328 Cannot find device "nvmf_tgt_br2" 00:47:14.328 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # true 00:47:14.328 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:47:14.328 Cannot find device "nvmf_br" 00:47:14.328 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # true 00:47:14.328 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:47:14.328 Cannot find device "nvmf_init_if" 00:47:14.328 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # true 00:47:14.328 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:47:14.328 Cannot find device "nvmf_init_if2" 00:47:14.328 09:58:21 
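Every "Cannot find device" above is deliberately swallowed: each probe is followed by a true fallback so that leftovers from a previous test, or their absence, never abort re-initialization under set -e. The pattern in isolation, with a hypothetical del_link helper standing in for the inline fallbacks:

  del_link() {
    ip link delete "$1" 2>/dev/null || true   # ignore missing devices
  }
  for dev in nvmf_br nvmf_init_if nvmf_init_if2; do
    del_link "$dev"
  done
  # namespace-scoped interfaces tolerate a missing netns the same way
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true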
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # true 00:47:14.328 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:14.328 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:14.328 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # true 00:47:14.328 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:14.328 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:14.328 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # true 00:47:14.328 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:47:14.328 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:47:14.328 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:47:14.328 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:47:14.328 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:47:14.328 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:47:14.328 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:47:14.588 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:47:14.588 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:47:14.588 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:47:14.588 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:47:14.588 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:47:14.588 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:47:14.588 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:47:14.588 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:47:14.588 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:47:14.588 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:47:14.588 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:47:14.588 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:47:14.588 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:47:14.588 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:47:14.588 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:47:14.588 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:47:14.588 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:47:14.588 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:47:14.588 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:47:14.588 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:47:14.588 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:47:14.588 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:47:14.588 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:47:14.588 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:47:14.589 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:47:14.589 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:47:14.589 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:47:14.589 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms 00:47:14.589 00:47:14.589 --- 10.0.0.3 ping statistics --- 00:47:14.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:14.589 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:47:14.589 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:47:14.589 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:47:14.589 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.088 ms 00:47:14.589 00:47:14.589 --- 10.0.0.4 ping statistics --- 00:47:14.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:14.589 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:47:14.589 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:47:14.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:47:14.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:47:14.589 00:47:14.589 --- 10.0.0.1 ping statistics --- 00:47:14.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:14.589 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:47:14.589 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:47:14.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:47:14.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:47:14.589 00:47:14.589 --- 10.0.0.2 ping statistics --- 00:47:14.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:14.589 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:47:14.589 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:14.589 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@461 -- # return 0 00:47:14.589 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:47:14.589 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:14.589 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:47:14.589 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:47:14.589 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:14.589 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:47:14.589 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:47:14.589 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:47:14.589 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:47:14.589 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:14.589 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:47:14.848 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=73348 00:47:14.848 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:47:14.848 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 73348 00:47:14.848 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 73348 ']' 00:47:14.848 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:14.848 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:14.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:14.848 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:14.848 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:14.848 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:47:14.848 [2024-12-09 09:58:21.687875] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:47:14.848 [2024-12-09 09:58:21.687952] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:14.848 [2024-12-09 09:58:21.841498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:47:15.106 [2024-12-09 09:58:21.900904] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:15.106 [2024-12-09 09:58:21.901045] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:15.106 [2024-12-09 09:58:21.901091] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:15.106 [2024-12-09 09:58:21.901131] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:15.106 [2024-12-09 09:58:21.901215] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:47:15.106 [2024-12-09 09:58:21.902131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:15.106 [2024-12-09 09:58:21.902227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:47:15.106 [2024-12-09 09:58:21.902331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:15.106 [2024-12-09 09:58:21.902333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:47:15.674 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:15.674 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:47:15.674 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:47:15.674 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:15.674 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:47:15.674 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:15.674 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:47:15.674 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:15.674 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:47:15.674 [2024-12-09 09:58:22.701677] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:15.674 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:15.674 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.3 -s 8009 discovery 00:47:15.674 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:15.674 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:47:15.933 [2024-12-09 09:58:22.717874] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:47:15.933 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:15.933 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:47:15.933 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:15.933 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:47:15.933 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:15.933 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:47:15.933 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:15.933 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:47:15.933 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:15.933 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:47:15.933 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:15.933 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:47:15.933 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:15.933 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:47:15.933 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:15.933 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:47:15.933 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:47:15.933 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:15.933 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:47:15.933 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:47:15.933 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:47:15.933 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:47:15.933 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:15.933 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:47:15.933 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:47:15.933 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:47:15.933 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:15.934 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:47:15.934 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:47:15.934 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:47:15.934 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:47:15.934 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:47:15.934 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:47:15.934 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:47:15.934 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -a 10.0.0.3 -s 8009 -o json 00:47:16.193 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:47:16.193 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:47:16.193 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:47:16.193 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:16.193 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:47:16.193 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:16.193 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:47:16.193 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:16.193 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:47:16.193 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:16.193 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:47:16.193 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:16.193 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:47:16.193 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:16.193 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:47:16.193 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:16.193 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:47:16.193 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:47:16.193 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:16.193 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:47:16.193 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:47:16.193 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:47:16.193 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:47:16.193 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -a 10.0.0.3 -s 8009 -o json 00:47:16.193 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:47:16.193 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:47:16.193 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:47:16.193 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:47:16.193 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:47:16.193 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:16.193 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:47:16.193 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:16.193 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:47:16.193 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:16.193 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:47:16.193 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:16.452 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:47:16.452 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:47:16.452 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:47:16.452 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:16.452 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:47:16.452 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:47:16.452 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:47:16.452 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:16.452 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:47:16.452 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:47:16.452 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:47:16.452 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:47:16.452 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:47:16.452 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:47:16.452 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -a 10.0.0.3 -s 8009 -o json 00:47:16.452 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:47:16.452 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:47:16.452 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:47:16.452 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:47:16.452 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:47:16.452 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -a 10.0.0.3 -s 8009 -o json 00:47:16.452 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:47:16.452 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:47:16.716 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:47:16.716 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:47:16.716 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:47:16.716 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -a 10.0.0.3 -s 8009 -o json 00:47:16.716 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:47:16.716 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:47:16.716 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:47:16.716 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:47:16.716 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:16.716 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:47:16.716 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:16.716 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:47:16.716 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:47:16.716 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd 
nvmf_discovery_get_referrals 00:47:16.716 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:16.716 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:47:16.716 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:47:16.716 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:47:16.716 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:16.716 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:47:16.716 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:47:16.716 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:47:16.716 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:47:16.716 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:47:16.716 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -a 10.0.0.3 -s 8009 -o json 00:47:16.716 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:47:16.716 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:47:16.981 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:47:16.981 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:47:16.981 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:47:16.981 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:47:16.981 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -a 10.0.0.3 -s 8009 -o json 00:47:16.981 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:47:16.981 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:47:16.981 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:47:16.981 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:47:16.981 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:47:16.981 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:47:16.981 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -a 10.0.0.3 -s 8009 -o json 00:47:16.981 
09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:47:17.240 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:47:17.240 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:47:17.240 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:17.240 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:47:17.240 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:17.240 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:47:17.240 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:17.240 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:47:17.240 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:47:17.240 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:17.240 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:47:17.240 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:47:17.240 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:47:17.240 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:47:17.240 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -a 10.0.0.3 -s 8009 -o json 00:47:17.240 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:47:17.240 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:47:17.240 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:47:17.240 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:47:17.240 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:47:17.240 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:47:17.240 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:47:17.240 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:47:17.500 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:47:17.500 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:47:17.500 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:47:17.500 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
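Editor's note: every referral check in the trace above follows one pattern: mutate the referral list over JSON-RPC, then read it back both from the RPC side and from an initiator's discovery log, and compare. A condensed sketch of that round trip, assuming rpc.py stands for SPDK's scripts/rpc.py (which the rpc_cmd helper ultimately drives here); the hostnqn/hostid values are the ones generated for this run:

# add three referrals on the discovery service
rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430

# RPC view: pull each referral's traddr out of the JSON list
rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

# initiator view: the same referrals must appear in the discovery log,
# minus the "current discovery subsystem" entry for 10.0.0.3 itself
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 \
    --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -a 10.0.0.3 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

The later passes re-add 127.0.0.2 twice with an explicit -n subsystem NQN, once as nqn.2014-08.org.nvmexpress.discovery and once as nqn.2016-06.io.spdk:cnode1, which is why the same traddr legitimately shows up twice in the RPC output and why the discovery log is then filtered by subtype ("nvme subsystem" vs "discovery subsystem referral") before the subnqn comparison.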
00:47:17.500 rmmod nvme_tcp 00:47:17.500 rmmod nvme_fabrics 00:47:17.500 rmmod nvme_keyring 00:47:17.500 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:47:17.500 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:47:17.500 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:47:17.500 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 73348 ']' 00:47:17.500 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 73348 00:47:17.500 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 73348 ']' 00:47:17.500 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 73348 00:47:17.500 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:47:17.500 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:17.500 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73348 00:47:17.500 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:17.500 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:17.500 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73348' 00:47:17.500 killing process with pid 73348 00:47:17.500 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 73348 00:47:17.500 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 73348 00:47:17.759 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:47:17.759 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:47:17.759 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:47:17.759 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:47:17.759 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:47:17.759 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:47:17.759 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:47:17.759 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:17.759 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:47:17.759 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:47:17.759 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:47:17.759 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:47:17.759 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:47:17.759 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:47:17.759 09:58:24 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:47:17.759 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:47:17.759 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:47:17.759 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:47:17.759 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:47:17.759 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:47:18.018 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:18.018 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:18.018 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@246 -- # remove_spdk_ns 00:47:18.018 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:18.018 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:18.018 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:18.018 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@300 -- # return 0 00:47:18.018 00:47:18.018 real 0m4.115s 00:47:18.018 user 0m12.111s 00:47:18.018 sys 0m1.150s 00:47:18.018 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:18.018 ************************************ 00:47:18.018 END TEST nvmf_referrals 00:47:18.018 ************************************ 00:47:18.018 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:47:18.018 09:58:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:47:18.018 09:58:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:47:18.018 09:58:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:18.018 09:58:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:47:18.018 ************************************ 00:47:18.018 START TEST nvmf_connect_disconnect 00:47:18.018 ************************************ 00:47:18.018 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:47:18.018 * Looking for test storage... 
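Editor's note: the stretch of trace that follows is scripts/common.sh deciding whether the installed lcov is older than 2 (lt 1.15 2) before picking coverage flags. The comparison splits each version string on the characters .-: and compares the fields numerically, left to right. A minimal standalone sketch of that logic, with function names borrowed from the trace (the real helper also validates each field with a decimal() regex check, omitted here for brevity):

# compare two dotted versions field by field against an operator
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    # walk the longer of the two field lists, treating missing fields as 0
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((10#${ver1[v]:-0} > 10#${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
        ((10#${ver1[v]:-0} < 10#${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
    done
    # all fields equal
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]
}

lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "1.15 < 2"   # true here, so the pre-2.0 lcov flag set is exported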
00:47:18.018 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:47:18.018 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:47:18.018 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:47:18.018 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:47:18.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:18.278 --rc genhtml_branch_coverage=1 00:47:18.278 --rc genhtml_function_coverage=1 00:47:18.278 --rc genhtml_legend=1 00:47:18.278 --rc geninfo_all_blocks=1 00:47:18.278 --rc geninfo_unexecuted_blocks=1 00:47:18.278 00:47:18.278 ' 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:47:18.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:18.278 --rc genhtml_branch_coverage=1 00:47:18.278 --rc genhtml_function_coverage=1 00:47:18.278 --rc genhtml_legend=1 00:47:18.278 --rc geninfo_all_blocks=1 00:47:18.278 --rc geninfo_unexecuted_blocks=1 00:47:18.278 00:47:18.278 ' 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:47:18.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:18.278 --rc genhtml_branch_coverage=1 00:47:18.278 --rc genhtml_function_coverage=1 00:47:18.278 --rc genhtml_legend=1 00:47:18.278 --rc geninfo_all_blocks=1 00:47:18.278 --rc geninfo_unexecuted_blocks=1 00:47:18.278 00:47:18.278 ' 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:47:18.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:18.278 --rc genhtml_branch_coverage=1 00:47:18.278 --rc genhtml_function_coverage=1 00:47:18.278 --rc genhtml_legend=1 00:47:18.278 --rc geninfo_all_blocks=1 00:47:18.278 --rc geninfo_unexecuted_blocks=1 00:47:18.278 00:47:18.278 ' 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:18.278 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:18.279 09:58:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:18.279 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@460 -- # nvmf_veth_init 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:47:18.279 Cannot find device "nvmf_init_br" 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:47:18.279 Cannot find device "nvmf_init_br2" 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:47:18.279 Cannot find device "nvmf_tgt_br" 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # true 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:47:18.279 Cannot find device "nvmf_tgt_br2" 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # true 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:47:18.279 Cannot find device "nvmf_init_br" 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # true 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:47:18.279 Cannot find device "nvmf_init_br2" 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # true 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:47:18.279 Cannot find device "nvmf_tgt_br" 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # true 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:47:18.279 Cannot find device "nvmf_tgt_br2" 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # true 
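The "Cannot find device" messages above are expected: nvmf_veth_init first tears down whatever a previous run may have left behind, and the trace shows each cleanup command immediately followed by a "true" echoed from the same common.sh line, which is the footprint of an "|| true" guard that keeps a missing device from aborting the run under set -e. A minimal sketch of that guard pattern, written as a standalone script under those assumptions rather than the SPDK helper verbatim:

    #!/usr/bin/env bash
    set -e    # any unguarded failure would abort the whole test
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        # "Cannot find device" is harmless on a clean host; the || true
        # guard keeps set -e from treating the failed cleanup as fatal
        ip link set "$dev" nomaster || true
        ip link set "$dev" down || true
    done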
00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:47:18.279 Cannot find device "nvmf_br" 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # true 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:47:18.279 Cannot find device "nvmf_init_if" 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # true 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:47:18.279 Cannot find device "nvmf_init_if2" 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # true 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:18.279 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # true 00:47:18.279 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:18.279 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:18.538 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # true 00:47:18.538 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:47:18.538 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:47:18.538 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:47:18.538 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:47:18.538 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:47:18.538 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:47:18.538 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:47:18.538 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:47:18.538 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:47:18.538 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:47:18.539 09:58:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:47:18.539 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:47:18.539 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms
00:47:18.539 
00:47:18.539 --- 10.0.0.3 ping statistics ---
00:47:18.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:47:18.539 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms
00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:47:18.539 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:47:18.539 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms
00:47:18.539 
00:47:18.539 --- 10.0.0.4 ping statistics ---
00:47:18.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:47:18.539 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms
00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:47:18.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:47:18.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms
00:47:18.539 
00:47:18.539 --- 10.0.0.1 ping statistics ---
00:47:18.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:47:18.539 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms
00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:47:18.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:47:18.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms
00:47:18.539 
00:47:18.539 --- 10.0.0.2 ping statistics ---
00:47:18.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:47:18.539 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms
00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@461 -- # return 0
00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF
00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable
00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=73708
00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 73708
00:47:18.539 09:58:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 73708 ']' 00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:18.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:18.539 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:47:18.798 [2024-12-09 09:58:25.620391] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:47:18.798 [2024-12-09 09:58:25.620500] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:18.798 [2024-12-09 09:58:25.773219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:47:18.798 [2024-12-09 09:58:25.832598] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:18.798 [2024-12-09 09:58:25.832644] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:18.798 [2024-12-09 09:58:25.832652] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:18.798 [2024-12-09 09:58:25.832658] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:18.798 [2024-12-09 09:58:25.832663] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
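Taken together, the setup traced from nvmf_veth_init through the nvmf_tgt launch above reduces to a short recipe. The following is an illustrative reconstruction from the commands in this log (device names, addresses, and paths as they appear here), not the SPDK helper verbatim; the log builds four veth pairs, only one per side is shown:

    # namespace plus one initiator-side and one target-side veth pair
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # addressing: initiators on 10.0.0.1/.2, targets on 10.0.0.3/.4
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    # a bridge joins the host-side peers so initiator and target can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # open the NVMe/TCP port, tagging the rule so teardown can sweep it later
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    # run the target inside the namespace; -m 0xF is the 4-core mask seen in the reactor messages above
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

The SPDK_NVMF comment is what makes the sweep-style cleanup possible: the teardown later in this log runs iptables-save, filters out every SPDK_NVMF-tagged line with grep -v, and feeds the remainder back through iptables-restore, removing all test rules in one pass.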
00:47:18.798 [2024-12-09 09:58:25.833527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:18.798 [2024-12-09 09:58:25.833567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:47:18.798 [2024-12-09 09:58:25.833662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:47:18.798 [2024-12-09 09:58:25.833733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:19.732 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:19.732 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:47:19.732 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:47:19.732 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:19.732 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:47:19.732 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:19.732 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:47:19.732 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:19.732 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:47:19.732 [2024-12-09 09:58:26.680537] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:19.732 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:19.732 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:47:19.732 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:19.732 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:47:19.732 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:19.732 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:47:19.732 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:47:19.732 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:19.732 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:47:19.732 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:19.732 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:47:19.732 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:19.732 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:47:19.732 09:58:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:19.732 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:47:19.732 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:19.732 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:47:19.733 [2024-12-09 09:58:26.770838] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:47:19.991 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:19.991 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:47:19.991 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:47:19.991 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:47:22.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:47:24.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:47:26.964 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:47:28.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:47:31.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:47:31.395 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:47:31.395 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:47:31.395 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:47:31.395 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:47:31.395 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:47:31.395 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:47:31.395 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:47:31.395 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:47:31.395 rmmod nvme_tcp 00:47:31.395 rmmod nvme_fabrics 00:47:31.395 rmmod nvme_keyring 00:47:31.396 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:47:31.396 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:47:31.396 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:47:31.396 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 73708 ']' 00:47:31.396 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 73708 00:47:31.396 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 73708 ']' 00:47:31.396 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 73708 00:47:31.396 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
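The rpc_cmd calls above are ordinary SPDK JSON-RPCs. Replayed by hand with scripts/rpc.py (SPDK's standard RPC client) against the same /var/tmp/spdk.sock, they would read as below, with every value copied from this trace:

    # TCP transport, flags exactly as traced (-u io-unit size 8192, -c in-capsule data size 0)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    # 64 MiB RAM-backed bdev with 512-byte blocks; the target names it Malloc0
    scripts/rpc.py bdev_malloc_create 64 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

With num_iterations=5 the test then connects an initiator to that subsystem and disconnects it five times; each "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" line above is one completed iteration.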
00:47:31.396 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:31.396 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73708 00:47:31.396 killing process with pid 73708 00:47:31.396 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:31.396 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:31.396 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73708' 00:47:31.396 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 73708 00:47:31.396 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 73708 00:47:31.660 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:47:31.660 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:47:31.660 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:47:31.660 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:47:31.660 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:47:31.660 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:47:31.660 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:47:31.660 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:31.660 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:47:31.660 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:47:31.660 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:47:31.661 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:47:31.661 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:47:31.661 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:47:31.661 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:47:31.661 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:47:31.661 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:47:31.661 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:47:31.661 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:47:31.661 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:47:31.661 09:58:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:47:31.920 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:47:31.920 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@246 -- # remove_spdk_ns
00:47:31.920 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:47:31.920 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:47:31.920 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:47:31.920 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@300 -- # return 0
00:47:31.920 
00:47:31.920 real 0m13.849s
00:47:31.920 user 0m50.316s
00:47:31.920 sys 0m1.532s
00:47:31.920 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
00:47:31.920 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:47:31.920 ************************************
00:47:31.920 END TEST nvmf_connect_disconnect
00:47:31.920 ************************************
00:47:31.920 09:58:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp
00:47:31.920 09:58:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:47:31.920 09:58:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:47:31.920 09:58:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:47:31.920 ************************************
00:47:31.920 START TEST nvmf_multitarget
00:47:31.920 ************************************
00:47:31.920 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp
00:47:31.920 * Looking for test storage...
00:47:32.180 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:47:32.180 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:47:32.180 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:47:32.180 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:47:32.180 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:47:32.180 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:32.180 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:32.180 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:32.180 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:47:32.180 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:47:32.180 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:47:32.180 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:47:32.180 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:47:32.180 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:47:32.180 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:47:32.180 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:32.180 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:47:32.180 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:47:32.180 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:32.180 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:32.180 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:47:32.180 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:47:32.180 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:32.180 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:47:32.180 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:47:32.180 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:47:32.180 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:47:32.180 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:32.180 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:47:32.180 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:47:32.180 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:32.180 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:32.180 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:47:32.180 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:47:32.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:32.181 --rc genhtml_branch_coverage=1 00:47:32.181 --rc genhtml_function_coverage=1 00:47:32.181 --rc genhtml_legend=1 00:47:32.181 --rc geninfo_all_blocks=1 00:47:32.181 --rc geninfo_unexecuted_blocks=1 00:47:32.181 00:47:32.181 ' 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:47:32.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:32.181 --rc genhtml_branch_coverage=1 00:47:32.181 --rc genhtml_function_coverage=1 00:47:32.181 --rc genhtml_legend=1 00:47:32.181 --rc geninfo_all_blocks=1 00:47:32.181 --rc geninfo_unexecuted_blocks=1 00:47:32.181 00:47:32.181 ' 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:47:32.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:32.181 --rc genhtml_branch_coverage=1 00:47:32.181 --rc genhtml_function_coverage=1 00:47:32.181 --rc genhtml_legend=1 00:47:32.181 --rc geninfo_all_blocks=1 00:47:32.181 --rc geninfo_unexecuted_blocks=1 00:47:32.181 00:47:32.181 ' 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:47:32.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:32.181 --rc genhtml_branch_coverage=1 00:47:32.181 --rc genhtml_function_coverage=1 00:47:32.181 --rc genhtml_legend=1 00:47:32.181 --rc geninfo_all_blocks=1 00:47:32.181 --rc geninfo_unexecuted_blocks=1 00:47:32.181 00:47:32.181 ' 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@7 -- # uname -s 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:32.181 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@15 -- # nvmftestinit 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@460 -- # nvmf_veth_init 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:47:32.181 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:32.182 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:47:32.182 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:47:32.182 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:47:32.182 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:47:32.182 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:47:32.182 Cannot find device "nvmf_init_br" 00:47:32.182 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:47:32.182 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:47:32.182 Cannot find device "nvmf_init_br2" 00:47:32.182 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:47:32.182 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:47:32.182 Cannot find device "nvmf_tgt_br" 00:47:32.182 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # true 00:47:32.182 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:47:32.182 Cannot find device "nvmf_tgt_br2" 00:47:32.182 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # true 00:47:32.182 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:47:32.182 Cannot find device "nvmf_init_br" 00:47:32.182 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # true 00:47:32.182 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:47:32.182 Cannot find device "nvmf_init_br2" 00:47:32.182 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # true 00:47:32.182 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:47:32.182 Cannot find device "nvmf_tgt_br" 00:47:32.182 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # true 00:47:32.182 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:47:32.182 Cannot find device "nvmf_tgt_br2" 00:47:32.182 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # true 00:47:32.182 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:47:32.442 Cannot find device "nvmf_br" 00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # true 00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:47:32.442 Cannot find device "nvmf_init_if" 00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # true 00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:47:32.442 Cannot find device "nvmf_init_if2" 00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # true 00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:32.442 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # true 00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:32.442 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@174 -- # true 00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br
00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:47:32.442 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:47:32.442 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms
00:47:32.442 
00:47:32.442 --- 10.0.0.3 ping statistics ---
00:47:32.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:47:32.442 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms
00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:47:32.442 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:47:32.442 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms
00:47:32.442 
00:47:32.442 --- 10.0.0.4 ping statistics ---
00:47:32.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:47:32.442 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms
00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:47:32.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:47:32.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms
00:47:32.442 
00:47:32.442 --- 10.0.0.1 ping statistics ---
00:47:32.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:47:32.442 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms
00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:47:32.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:47:32.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms
00:47:32.442 
00:47:32.442 --- 10.0.0.2 ping statistics ---
00:47:32.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:47:32.442 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms
00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@461 -- # return 0
00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:47:32.442 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:47:32.443 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:47:32.443 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:47:32.443 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:47:32.443 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:47:32.443 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:47:32.702 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF
00:47:32.702 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:47:32.702 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable
00:47:32.702 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:47:32.702 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=74171
00:47:32.702 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:47:32.702 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 74171
00:47:32.702 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 74171 ']'
00:47:32.702 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:47:32.702 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100
00:47:32.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:47:32.702 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:47:32.702 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable
00:47:32.702 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:47:32.702 [2024-12-09 09:58:39.568291] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization...
00:47:32.702 [2024-12-09 09:58:39.568416] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:32.702 [2024-12-09 09:58:39.727914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:47:32.961 [2024-12-09 09:58:39.786917] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:32.961 [2024-12-09 09:58:39.786971] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:32.961 [2024-12-09 09:58:39.786978] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:32.961 [2024-12-09 09:58:39.786984] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:32.961 [2024-12-09 09:58:39.786988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:47:32.961 [2024-12-09 09:58:39.787828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:32.961 [2024-12-09 09:58:39.787940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:47:32.961 [2024-12-09 09:58:39.788025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:32.961 [2024-12-09 09:58:39.788039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:47:33.529 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:33.529 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:47:33.529 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:47:33.529 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:33.529 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:47:33.529 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:33.529 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:47:33.529 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:47:33.529 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:47:33.790 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:47:33.790 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:47:33.790 "nvmf_tgt_1" 00:47:34.049 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:47:34.049 "nvmf_tgt_2" 00:47:34.049 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:47:34.049 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@28 -- # jq length 00:47:34.308 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:47:34.308 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:47:34.308 true 00:47:34.308 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:47:34.566 true 00:47:34.566 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:47:34.566 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:47:34.566 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:47:34.566 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:47:34.566 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:47:34.566 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:47:34.566 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:47:34.566 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:47:34.566 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:47:34.566 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:47:34.566 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:47:34.826 rmmod nvme_tcp 00:47:34.826 rmmod nvme_fabrics 00:47:34.826 rmmod nvme_keyring 00:47:34.826 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:47:34.826 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:47:34.826 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:47:34.826 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 74171 ']' 00:47:34.826 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 74171 00:47:34.826 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 74171 ']' 00:47:34.826 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 74171 00:47:34.826 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:47:34.826 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:34.826 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74171 00:47:34.826 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:34.826 killing process with pid 74171 00:47:34.826 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:34.826 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
74171' 00:47:34.826 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 74171 00:47:34.826 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 74171 00:47:35.098 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:47:35.098 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:47:35.098 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:47:35.098 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:47:35.098 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:47:35.098 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:47:35.098 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:47:35.098 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:35.098 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:47:35.098 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:47:35.098 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:47:35.098 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:47:35.098 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:47:35.098 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:47:35.098 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:47:35.098 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:47:35.098 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:47:35.098 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:47:35.098 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:47:35.375 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:47:35.376 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:35.376 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:35.376 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@246 -- # remove_spdk_ns 00:47:35.376 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:35.376 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:35.376 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:35.376 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@300 -- # return 0 00:47:35.376 
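Stripped of xtrace noise, the multitarget pass above is a small create/count/delete round-trip against the test's RPC helper: the target count starts at 1 (the default target created by nvmf_tgt), grows to 3 after the two nvmf_create_target calls, and returns to 1 once both are deleted. Roughly, using the same paths and arguments as the trace:

RPC=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py

[ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]   # only the default target
$RPC nvmf_create_target -n nvmf_tgt_1 -s 32        # prints "nvmf_tgt_1" on success
$RPC nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]   # default + the two new targets
$RPC nvmf_delete_target -n nvmf_tgt_1              # prints "true" on success
$RPC nvmf_delete_target -n nvmf_tgt_2
[ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]   # back to the default target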
************************************ 00:47:35.376 END TEST nvmf_multitarget 00:47:35.376 ************************************ 00:47:35.376 00:47:35.376 real 0m3.436s 00:47:35.376 user 0m10.120s 00:47:35.376 sys 0m0.858s 00:47:35.376 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:35.376 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:47:35.376 09:58:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:47:35.376 09:58:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:47:35.376 09:58:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:35.376 09:58:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:47:35.376 ************************************ 00:47:35.376 START TEST nvmf_rpc 00:47:35.376 ************************************ 00:47:35.376 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:47:35.635 * Looking for test storage... 00:47:35.635 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:47:35.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:35.635 --rc genhtml_branch_coverage=1 00:47:35.635 --rc genhtml_function_coverage=1 00:47:35.635 --rc genhtml_legend=1 00:47:35.635 --rc geninfo_all_blocks=1 00:47:35.635 --rc geninfo_unexecuted_blocks=1 00:47:35.635 00:47:35.635 ' 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:47:35.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:35.635 --rc genhtml_branch_coverage=1 00:47:35.635 --rc genhtml_function_coverage=1 00:47:35.635 --rc genhtml_legend=1 00:47:35.635 --rc geninfo_all_blocks=1 00:47:35.635 --rc geninfo_unexecuted_blocks=1 00:47:35.635 00:47:35.635 ' 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:47:35.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:35.635 --rc genhtml_branch_coverage=1 00:47:35.635 --rc genhtml_function_coverage=1 00:47:35.635 --rc genhtml_legend=1 00:47:35.635 --rc geninfo_all_blocks=1 00:47:35.635 --rc geninfo_unexecuted_blocks=1 00:47:35.635 00:47:35.635 ' 00:47:35.635 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:47:35.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:35.635 --rc genhtml_branch_coverage=1 00:47:35.635 --rc genhtml_function_coverage=1 00:47:35.635 --rc genhtml_legend=1 00:47:35.635 --rc geninfo_all_blocks=1 00:47:35.635 --rc geninfo_unexecuted_blocks=1 00:47:35.635 00:47:35.635 ' 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:35.636 09:58:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:35.636 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:47:35.636 Cannot find device "nvmf_init_br" 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:47:35.636 09:58:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:47:35.636 Cannot find device "nvmf_init_br2" 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:47:35.636 Cannot find device "nvmf_tgt_br" 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # true 00:47:35.636 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:47:35.896 Cannot find device "nvmf_tgt_br2" 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # true 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:47:35.896 Cannot find device "nvmf_init_br" 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # true 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:47:35.896 Cannot find device "nvmf_init_br2" 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # true 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:47:35.896 Cannot find device "nvmf_tgt_br" 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # true 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:47:35.896 Cannot find device "nvmf_tgt_br2" 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # true 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:47:35.896 Cannot find device "nvmf_br" 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # true 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:47:35.896 Cannot find device "nvmf_init_if" 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # true 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:47:35.896 Cannot find device "nvmf_init_if2" 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # true 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:35.896 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # true 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:35.896 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # true 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name 
nvmf_init_br2 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:47:35.896 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:47:35.897 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:47:35.897 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:47:36.157 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:47:36.157 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:47:36.157 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:47:36.157 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:47:36.157 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:47:36.157 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:47:36.157 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:47:36.157 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:47:36.157 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:47:36.157 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:47:36.157 00:47:36.157 --- 10.0.0.3 ping statistics --- 00:47:36.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:36.157 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:47:36.157 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:47:36.157 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:47:36.157 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:47:36.157 00:47:36.157 --- 10.0.0.4 ping statistics --- 00:47:36.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:36.157 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:47:36.157 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:47:36.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:47:36.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:47:36.157 00:47:36.157 --- 10.0.0.1 ping statistics --- 00:47:36.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:36.157 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:47:36.157 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:47:36.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:47:36.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:47:36.157 00:47:36.157 --- 10.0.0.2 ping statistics --- 00:47:36.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:36.157 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:47:36.157 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:36.157 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@461 -- # return 0 00:47:36.157 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:47:36.157 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:36.157 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:47:36.157 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:47:36.157 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:36.157 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:47:36.157 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:47:36.157 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:47:36.157 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:47:36.157 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:36.157 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:47:36.157 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:47:36.157 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=74455 00:47:36.157 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 74455 00:47:36.157 09:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 74455 ']' 00:47:36.157 09:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:36.157 09:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:36.157 09:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:36.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:36.157 09:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:36.157 09:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:47:36.157 [2024-12-09 09:58:43.058012] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:47:36.157 [2024-12-09 09:58:43.058127] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:36.417 [2024-12-09 09:58:43.218249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:47:36.417 [2024-12-09 09:58:43.277387] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:36.417 [2024-12-09 09:58:43.277549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:36.417 [2024-12-09 09:58:43.277597] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:36.417 [2024-12-09 09:58:43.277629] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:36.417 [2024-12-09 09:58:43.277652] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:47:36.417 [2024-12-09 09:58:43.278599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:36.417 [2024-12-09 09:58:43.278734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:47:36.417 [2024-12-09 09:58:43.278735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:36.417 [2024-12-09 09:58:43.278693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:47:37.351 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:37.351 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:47:37.351 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:47:37.351 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:37.351 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:47:37.351 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:37.351 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:47:37.351 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:37.351 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:47:37.351 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:37.351 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:47:37.351 "poll_groups": [ 00:47:37.352 { 00:47:37.352 "admin_qpairs": 0, 00:47:37.352 "completed_nvme_io": 0, 00:47:37.352 "current_admin_qpairs": 0, 00:47:37.352 "current_io_qpairs": 0, 00:47:37.352 "io_qpairs": 0, 00:47:37.352 "name": "nvmf_tgt_poll_group_000", 00:47:37.352 "pending_bdev_io": 0, 00:47:37.352 "transports": [] 00:47:37.352 }, 00:47:37.352 { 00:47:37.352 "admin_qpairs": 0, 00:47:37.352 "completed_nvme_io": 0, 00:47:37.352 "current_admin_qpairs": 0, 00:47:37.352 "current_io_qpairs": 0, 00:47:37.352 "io_qpairs": 0, 00:47:37.352 "name": "nvmf_tgt_poll_group_001", 00:47:37.352 "pending_bdev_io": 0, 00:47:37.352 "transports": [] 00:47:37.352 }, 00:47:37.352 { 00:47:37.352 "admin_qpairs": 0, 00:47:37.352 "completed_nvme_io": 0, 00:47:37.352 "current_admin_qpairs": 0, 00:47:37.352 "current_io_qpairs": 0, 
00:47:37.352 "io_qpairs": 0, 00:47:37.352 "name": "nvmf_tgt_poll_group_002", 00:47:37.352 "pending_bdev_io": 0, 00:47:37.352 "transports": [] 00:47:37.352 }, 00:47:37.352 { 00:47:37.352 "admin_qpairs": 0, 00:47:37.352 "completed_nvme_io": 0, 00:47:37.352 "current_admin_qpairs": 0, 00:47:37.352 "current_io_qpairs": 0, 00:47:37.352 "io_qpairs": 0, 00:47:37.352 "name": "nvmf_tgt_poll_group_003", 00:47:37.352 "pending_bdev_io": 0, 00:47:37.352 "transports": [] 00:47:37.352 } 00:47:37.352 ], 00:47:37.352 "tick_rate": 2290000000 00:47:37.352 }' 00:47:37.352 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:47:37.352 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:47:37.352 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:47:37.352 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:47:37.352 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:47:37.352 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:47:37.352 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:47:37.352 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:47:37.352 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:37.352 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:47:37.352 [2024-12-09 09:58:44.229961] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:37.352 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:37.352 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:47:37.352 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:37.352 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:47:37.352 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:37.352 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:47:37.352 "poll_groups": [ 00:47:37.352 { 00:47:37.352 "admin_qpairs": 0, 00:47:37.352 "completed_nvme_io": 0, 00:47:37.352 "current_admin_qpairs": 0, 00:47:37.352 "current_io_qpairs": 0, 00:47:37.352 "io_qpairs": 0, 00:47:37.352 "name": "nvmf_tgt_poll_group_000", 00:47:37.352 "pending_bdev_io": 0, 00:47:37.352 "transports": [ 00:47:37.352 { 00:47:37.352 "trtype": "TCP" 00:47:37.352 } 00:47:37.352 ] 00:47:37.352 }, 00:47:37.352 { 00:47:37.352 "admin_qpairs": 0, 00:47:37.352 "completed_nvme_io": 0, 00:47:37.352 "current_admin_qpairs": 0, 00:47:37.352 "current_io_qpairs": 0, 00:47:37.352 "io_qpairs": 0, 00:47:37.352 "name": "nvmf_tgt_poll_group_001", 00:47:37.352 "pending_bdev_io": 0, 00:47:37.352 "transports": [ 00:47:37.352 { 00:47:37.352 "trtype": "TCP" 00:47:37.352 } 00:47:37.352 ] 00:47:37.352 }, 00:47:37.352 { 00:47:37.352 "admin_qpairs": 0, 00:47:37.352 "completed_nvme_io": 0, 00:47:37.352 "current_admin_qpairs": 0, 00:47:37.352 "current_io_qpairs": 0, 00:47:37.352 "io_qpairs": 0, 00:47:37.352 "name": "nvmf_tgt_poll_group_002", 00:47:37.352 "pending_bdev_io": 0, 00:47:37.352 "transports": [ 00:47:37.352 { 00:47:37.352 "trtype": "TCP" 00:47:37.352 } 
00:47:37.352 ] 00:47:37.352 }, 00:47:37.352 { 00:47:37.352 "admin_qpairs": 0, 00:47:37.352 "completed_nvme_io": 0, 00:47:37.352 "current_admin_qpairs": 0, 00:47:37.352 "current_io_qpairs": 0, 00:47:37.352 "io_qpairs": 0, 00:47:37.352 "name": "nvmf_tgt_poll_group_003", 00:47:37.352 "pending_bdev_io": 0, 00:47:37.352 "transports": [ 00:47:37.352 { 00:47:37.352 "trtype": "TCP" 00:47:37.352 } 00:47:37.352 ] 00:47:37.352 } 00:47:37.352 ], 00:47:37.352 "tick_rate": 2290000000 00:47:37.352 }' 00:47:37.352 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:47:37.352 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:47:37.352 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:47:37.352 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:47:37.352 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:47:37.352 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:47:37.352 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:47:37.352 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:47:37.352 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:47:37.352 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:47:37.352 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:47:37.352 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:47:37.352 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:47:37.352 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:47:37.352 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:37.352 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:47:37.611 Malloc1 00:47:37.611 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:37.611 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:47:37.611 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:37.611 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:47:37.611 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:37.611 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:47:37.611 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:37.611 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:47:37.611 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:37.611 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:47:37.611 09:58:44 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:37.611 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:47:37.611 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:37.611 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:47:37.611 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:37.611 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:47:37.611 [2024-12-09 09:58:44.438420] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:47:37.611 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:37.611 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -a 10.0.0.3 -s 4420 00:47:37.611 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:47:37.611 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -a 10.0.0.3 -s 4420 00:47:37.611 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:47:37.611 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:37.611 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:47:37.611 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:37.612 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:47:37.612 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:37.612 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:47:37.612 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:47:37.612 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -a 10.0.0.3 -s 4420 00:47:37.612 [2024-12-09 09:58:44.466853] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541' 00:47:37.612 Failed to write to /dev/nvme-fabrics: Input/output error 00:47:37.612 could not add new controller: failed to write to nvme-fabrics device 00:47:37.612 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 
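The Input/output error here is the expected outcome: the subsystem was created with allow_any_host disabled (nvmf_subsystem_allow_any_host -d) and no hosts registered, so the target rejects the connect with "does not allow host" and the NOT helper treats the failure as a pass. The remainder of this pass exercises the allow list in both directions; a condensed sketch, using the same RPCs, NVME_HOST arguments, and addresses as the trace:

rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1                       # deny by default
nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420   # fails: host not allowed
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420   # succeeds
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"             # rejected again
rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1                       # any host accepted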
00:47:37.612 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:47:37.612 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:47:37.612 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:47:37.612 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:47:37.612 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:37.612 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:47:37.612 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:37.612 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:47:37.871 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:47:37.871 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:47:37.871 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:47:37.871 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:47:37.871 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:47:39.769 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:47:39.769 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:47:39.769 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:47:39.769 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:47:39.769 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:47:39.769 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:47:39.769 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:47:39.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:47:39.769 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:47:39.769 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:47:39.769 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:47:39.769 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:47:39.769 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:47:39.770 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:47:39.770 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:47:39.770 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
00:47:39.770 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:39.770 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:39.770 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:39.770 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
00:47:39.770 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0
00:47:39.770 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
00:47:39.770 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme
00:47:39.770 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:47:39.770 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme
00:47:39.770 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:47:39.770 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme
00:47:39.770 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:47:39.770 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme
00:47:39.770 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]]
00:47:39.770 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
00:47:39.770 [2024-12-09 09:58:46.783690] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541'
00:47:39.770 Failed to write to /dev/nvme-fabrics: Input/output error
00:47:39.770 could not add new controller: failed to write to nvme-fabrics device
00:47:39.770 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1
00:47:39.770 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:47:39.770 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:47:39.770 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:47:39.770 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
00:47:39.770 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:39.770 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
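The NOT prefix on the second connect is the harness's expect-failure wrapper: valid_exec_arg confirms the binary exists, the command runs, and its non-zero exit status (es=1 above) is inverted into success. The pattern reduces to roughly this sketch (the es > 128 signal handling visible in the trace is omitted):

    NOT() {
        local es=0
        "$@" || es=$?
        # succeed only when the wrapped command failed
        ((es != 0))
    }
    # usage: NOT nvme connect ...   # passes because the target refuses the host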
00:47:39.770 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:39.770 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
00:47:40.028 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME
00:47:40.028 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:47:40.028 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:47:40.028 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:47:40.028 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:47:41.928 09:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:47:41.928 09:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:47:41.928 09:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:47:42.186 09:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:47:42.186 09:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:47:42.186 09:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:47:42.186 09:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:47:42.186 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:47:42.186 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:47:42.186 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:47:42.186 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:47:42.186 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:47:42.186 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:47:42.186 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:47:42.186 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:47:42.186 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:47:42.186 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:42.186 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:42.186 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:42.186 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5
00:47:42.186 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:47:42.186 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:47:42.186 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:42.186 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:42.186 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:42.186 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:47:42.186 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:42.186 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:42.186 [2024-12-09 09:58:49.194061] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:47:42.186 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:42.186 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:47:42.186 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:42.186 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:42.186 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:42.186 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:47:42.186 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:42.186 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:42.186 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:42.186 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
00:47:42.444 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:47:42.444 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:47:42.444 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:47:42.444 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:47:42.444 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:47:44.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
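From rpc.sh@81 onward the test repeats one full subsystem lifecycle five times; the iteration just traced is representative, and the remaining four below follow the same pattern. Condensed, under the same assumption that rpc.py stands in for the harness's rpc_cmd, each pass amounts to:

    for i in $(seq 1 5); do
        rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
        # waitforserial SPDKISFASTANDAWESOME would gate here before teardown
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done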
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:44.974 [2024-12-09 09:58:51.493406] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:47:44.974 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:47:46.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:46.873 [2024-12-09 09:58:53.900766] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:47:46.873 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:47.130 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:47.130 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:47.130 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
00:47:47.130 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:47:47.130 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:47:47.130 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:47:47.130 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:47:47.130 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:47:49.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:49.664 [2024-12-09 09:58:56.299858] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:47:49.664 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:47:51.563 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:47:51.563 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:47:51.563 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:47:51.563 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:47:51.563 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:47:51.563 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:47:51.563 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:47:51.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:47:51.563 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:47:51.563 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:47:51.563 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:47:51.563 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:47:51.563 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:47:51.563 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:47:51.563 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:47:51.563 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:47:51.563 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:51.563 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:51.563 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:51.563 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:47:51.563 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:51.821 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:51.821 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:51.821 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:47:51.821 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:47:51.821 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:51.821 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:51.821 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:51.821 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:47:51.821 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:51.821 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:51.821 [2024-12-09 09:58:58.623211] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:47:51.821 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:51.821 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:47:51.821 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:51.821 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:51.821 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:51.821 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:47:51.821 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:51.821 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:51.821 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:51.821 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
00:47:51.821 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:47:51.821 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:47:51.821 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:47:51.821 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:47:51.821 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:47:54.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
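Note how the namespace calls pair up in each iteration above: nvmf_subsystem_add_ns takes a bdev name plus an optional namespace ID (-n 5), while nvmf_subsystem_remove_ns takes only the NSID. A two-line illustration of that pairing, again with rpc.py standing in for the harness's rpc_cmd:

    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # attach bdev as NSID 5
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5           # detach by NSID, not by bdev name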
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:54.353 [2024-12-09 09:59:00.951168] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:54.353 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:54.353 [2024-12-09 09:59:00.999164] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:54.353 [2024-12-09 09:59:01.047140] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:54.353 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:54.354 [2024-12-09 09:59:01.095137] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:47:54.354 "poll_groups": [
00:47:54.354 {
00:47:54.354 "admin_qpairs": 2,
00:47:54.354 "completed_nvme_io": 151,
00:47:54.354 "current_admin_qpairs": 0,
00:47:54.354 "current_io_qpairs": 0,
00:47:54.354 "io_qpairs": 16,
00:47:54.354 "name": "nvmf_tgt_poll_group_000",
00:47:54.354 "pending_bdev_io": 0,
00:47:54.354 "transports": [
00:47:54.354 {
00:47:54.354 "trtype": "TCP"
00:47:54.354 }
00:47:54.354 ]
00:47:54.354 },
00:47:54.354 {
00:47:54.354 "admin_qpairs": 3,
00:47:54.354 "completed_nvme_io": 68,
00:47:54.354 "current_admin_qpairs": 0,
00:47:54.354 "current_io_qpairs": 0,
00:47:54.354 "io_qpairs": 17,
00:47:54.354 "name": "nvmf_tgt_poll_group_001",
00:47:54.354 "pending_bdev_io": 0,
00:47:54.354 "transports": [
00:47:54.354 {
00:47:54.354 "trtype": "TCP"
00:47:54.354 }
00:47:54.354 ]
00:47:54.354 },
00:47:54.354 {
00:47:54.354 "admin_qpairs": 1,
00:47:54.354 "completed_nvme_io": 119,
00:47:54.354 "current_admin_qpairs": 0,
00:47:54.354 "current_io_qpairs": 0,
00:47:54.354 "io_qpairs": 19,
00:47:54.354 "name": "nvmf_tgt_poll_group_002",
00:47:54.354 "pending_bdev_io": 0,
00:47:54.354 "transports": [
00:47:54.354 {
00:47:54.354 "trtype": "TCP"
00:47:54.354 }
00:47:54.354 ]
00:47:54.354 },
00:47:54.354 {
00:47:54.354 "admin_qpairs": 1,
00:47:54.354 "completed_nvme_io": 82,
00:47:54.354 "current_admin_qpairs": 0,
00:47:54.354 "current_io_qpairs": 0,
00:47:54.354 "io_qpairs": 18,
00:47:54.354 "name": "nvmf_tgt_poll_group_003",
00:47:54.354 "pending_bdev_io": 0,
00:47:54.354 "transports": [
00:47:54.354 {
00:47:54.354 "trtype": "TCP"
00:47:54.354 }
00:47:54.354 ]
00:47:54.354 }
00:47:54.354 ],
00:47:54.354 "tick_rate": 2290000000 00:47:54.354 }' 00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:47:54.354 rmmod nvme_tcp 00:47:54.354 rmmod nvme_fabrics 00:47:54.354 rmmod nvme_keyring 00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 74455 ']' 00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 74455 00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 74455 ']' 00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 74455 00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:54.354 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74455 00:47:54.612 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:54.612 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:54.612 killing process with pid 74455 00:47:54.612 09:59:01 
00:47:54.612 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 74455
00:47:54.612 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 74455
00:47:54.613 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:47:54.613 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:47:54.613 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:47:54.613 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr
00:47:54.613 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save
00:47:54.613 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:47:54.613 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore
00:47:54.613 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:47:54.613 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:47:54.613 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:47:54.613 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:47:54.613 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:47:54.870 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:47:54.870 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:47:54.870 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:47:54.870 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:47:54.870 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:47:54.870 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:47:54.870 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:47:54.870 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:47:54.870 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:47:54.870 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:47:54.870 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@246 -- # remove_spdk_ns
00:47:54.870 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:47:54.870 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:47:54.870 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:47:54.870 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@300 -- # return 0
00:47:54.870 
00:47:54.870 real 0m19.565s
00:47:54.870 user 1m13.062s
00:47:54.870 sys 0m2.290s
00:47:55.128 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:47:55.128 ************************************
00:47:55.128 END TEST nvmf_rpc
00:47:55.128 ************************************
00:47:55.128 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:47:55.128 09:59:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:47:55.128 09:59:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:47:55.128 09:59:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:47:55.128 09:59:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:47:55.128 ************************************
00:47:55.128 START TEST nvmf_invalid
00:47:55.128 ************************************
00:47:55.128 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:47:55.128 * Looking for test storage...
00:47:55.128 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:47:55.128 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:47:55.128 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version
00:47:55.128 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-:
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-:
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<'
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 ))
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:47:55.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:47:55.389 --rc genhtml_branch_coverage=1
00:47:55.389 --rc genhtml_function_coverage=1
00:47:55.389 --rc genhtml_legend=1
00:47:55.389 --rc geninfo_all_blocks=1
00:47:55.389 --rc geninfo_unexecuted_blocks=1
00:47:55.389 
00:47:55.389 '
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:47:55.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:47:55.389 --rc genhtml_branch_coverage=1
00:47:55.389 --rc genhtml_function_coverage=1
00:47:55.389 --rc genhtml_legend=1
00:47:55.389 --rc geninfo_all_blocks=1
00:47:55.389 --rc geninfo_unexecuted_blocks=1
00:47:55.389 
00:47:55.389 '
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:47:55.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:47:55.389 --rc genhtml_branch_coverage=1
00:47:55.389 --rc genhtml_function_coverage=1
00:47:55.389 --rc genhtml_legend=1
00:47:55.389 --rc geninfo_all_blocks=1
00:47:55.389 --rc geninfo_unexecuted_blocks=1
00:47:55.389 
00:47:55.389 '
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:47:55.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:47:55.389 --rc genhtml_branch_coverage=1
00:47:55.389 --rc genhtml_function_coverage=1
00:47:55.389 --rc genhtml_legend=1
00:47:55.389 --rc geninfo_all_blocks=1
00:47:55.389 --rc geninfo_unexecuted_blocks=1
00:47:55.389 
00:47:55.389 '
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s
00:47:55.389 09:59:02 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:55.389 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # 
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:47:55.389 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
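What follows in the trace is nvmf_veth_init building the virtual test network: stale interfaces are probed and removed first (the "Cannot find device" messages are expected on a clean host), then veth pairs are created, the target-side ends are moved into the nvmf_tgt_ns_spdk namespace, both sides are addressed out of 10.0.0.0/24, and everything is stitched together with the nvmf_br bridge. As a reading aid, here is a minimal standalone sketch of the same topology, reduced to a single initiator/target pair (interface names, addresses, and the iptables rule are taken from the trace; run as root, teardown and the second pair omitted):

  #!/usr/bin/env bash
  set -e
  # Namespace that will host the NVMe-oF target side.
  ip netns add nvmf_tgt_ns_spdk
  # One veth pair per side; the *_br peers get enslaved to the bridge.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  # Move the target end into the namespace and address both ends.
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  # Bring links up on both sides, including loopback in the namespace.
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Bridge the two segments together.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # Allow NVMe/TCP traffic in, mirroring the harness's ipts helper.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  # Sanity check: the initiator side should reach the target address.
  ping -c 1 10.0.0.3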
00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:47:55.390 Cannot find device "nvmf_init_br"
00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # true
00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:47:55.390 Cannot find device "nvmf_init_br2"
00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # true
00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:47:55.390 Cannot find device "nvmf_tgt_br"
00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # true
00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:47:55.390 Cannot find device "nvmf_tgt_br2"
00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # true
00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:47:55.390 Cannot find device "nvmf_init_br"
00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # true
00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:47:55.390 Cannot find device "nvmf_init_br2"
00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # true
00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:47:55.390 Cannot find device "nvmf_tgt_br"
00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # true
00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:47:55.390 Cannot find device "nvmf_tgt_br2"
00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # true
00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:47:55.390 Cannot find device "nvmf_br"
00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # true
00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:47:55.390 Cannot find device "nvmf_init_if"
00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # true
00:47:55.390 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:47:55.650 Cannot find device "nvmf_init_if2"
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # true
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:47:55.650 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # true
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:47:55.650 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # true
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:47:55.650 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:47:55.650 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms
00:47:55.650
00:47:55.650 --- 10.0.0.3 ping statistics ---
00:47:55.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:47:55.650 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:47:55.650 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:47:55.650 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.095 ms
00:47:55.650
00:47:55.650 --- 10.0.0.4 ping statistics ---
00:47:55.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:47:55.650 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:47:55.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:47:55.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms
00:47:55.650
00:47:55.650 --- 10.0.0.1 ping statistics ---
00:47:55.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:47:55.650 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:47:55.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:47:55.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms
00:47:55.650
00:47:55.650 --- 10.0.0.2 ping statistics ---
00:47:55.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:47:55.650 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:47:55.650 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@461 -- # return 0
00:47:55.911 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:47:55.911 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:47:55.911 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:47:55.911 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:47:55.911 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:47:55.911 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:47:55.911 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:47:55.911 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF
00:47:55.911 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:47:55.911 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable
00:47:55.911 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:47:55.911 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:47:55.911 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=75032
00:47:55.911 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 75032
00:47:55.911 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 75032 ']'
00:47:55.911 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:47:55.911 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100
00:47:55.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:47:55.911 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:47:55.911 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable
00:47:55.911 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:47:55.911 [2024-12-09 09:59:02.800255] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization...
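At this point the target application is running inside the namespace and the negative tests begin. The first case, reproduced by hand, would look roughly like the sketch below (binary and script paths as in the trace; the polling loop is a crude stand-in for the harness's waitforlisten helper, and using rpc_get_methods as a liveness probe is an assumption, not what the harness does):

  # Start the target inside the namespace, as nvmf/common.sh@508 does.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Poll the default RPC socket (/var/tmp/spdk.sock) until the app answers.
  until "$rpc" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
  # Creating a subsystem under a nonexistent target name must fail with
  # JSON-RPC code -32603, "Unable to find target foobar".
  "$rpc" nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode10713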
00:47:55.911 [2024-12-09 09:59:02.800359] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:47:56.168 [2024-12-09 09:59:02.956987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:47:56.168 [2024-12-09 09:59:03.015805] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:47:56.168 [2024-12-09 09:59:03.015870] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:47:56.168 [2024-12-09 09:59:03.015877] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:47:56.168 [2024-12-09 09:59:03.015883] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:47:56.168 [2024-12-09 09:59:03.015888] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:47:56.168 [2024-12-09 09:59:03.016752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:47:56.168 [2024-12-09 09:59:03.016842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:47:56.168 [2024-12-09 09:59:03.016941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:47:56.168 [2024-12-09 09:59:03.016943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:47:56.731 09:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:47:56.731 09:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0
00:47:56.731 09:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:47:56.731 09:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable
00:47:56.731 09:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:47:56.987 09:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:47:56.987 09:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:47:56.987 09:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode10713
00:47:57.244 [2024-12-09 09:59:04.049424] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:47:57.244 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/12/09 09:59:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode10713 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar
00:47:57.244 request:
00:47:57.244 {
00:47:57.244 "method": "nvmf_create_subsystem",
00:47:57.244 "params": {
00:47:57.244 "nqn": "nqn.2016-06.io.spdk:cnode10713",
00:47:57.244 "tgt_name": "foobar"
00:47:57.244 }
00:47:57.244 }
00:47:57.244 Got JSON-RPC error response
00:47:57.244 GoRPCClient: error on JSON-RPC call'
00:47:57.244 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/12/09 09:59:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode10713 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar
00:47:57.244 request:
00:47:57.244 {
00:47:57.244 "method": "nvmf_create_subsystem",
00:47:57.244 "params": {
00:47:57.244 "nqn": "nqn.2016-06.io.spdk:cnode10713",
00:47:57.244 "tgt_name": "foobar"
00:47:57.244 }
00:47:57.244 }
00:47:57.244 Got JSON-RPC error response
00:47:57.244 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:47:57.244 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:47:57.244 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode28315
00:47:57.501 [2024-12-09 09:59:04.341199] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28315: invalid serial number 'SPDKISFASTANDAWESOME'
00:47:57.501 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/12/09 09:59:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode28315 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME
00:47:57.501 request:
00:47:57.501 {
00:47:57.501 "method": "nvmf_create_subsystem",
00:47:57.501 "params": {
00:47:57.501 "nqn": "nqn.2016-06.io.spdk:cnode28315",
00:47:57.501 "serial_number": "SPDKISFASTANDAWESOME\u001f"
00:47:57.501 }
00:47:57.501 }
00:47:57.501 Got JSON-RPC error response
00:47:57.501 GoRPCClient: error on JSON-RPC call'
00:47:57.501 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/12/09 09:59:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode28315 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME
00:47:57.501 request:
00:47:57.501 {
00:47:57.501 "method": "nvmf_create_subsystem",
00:47:57.501 "params": {
00:47:57.501 "nqn": "nqn.2016-06.io.spdk:cnode28315",
00:47:57.501 "serial_number": "SPDKISFASTANDAWESOME\u001f"
00:47:57.501 }
00:47:57.501 }
00:47:57.501 Got JSON-RPC error response
00:47:57.501 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]]
00:47:57.501 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:47:57.501 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode12709
00:47:57.762 [2024-12-09 09:59:04.656905] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12709: invalid model number 'SPDK_Controller'
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/12/09 09:59:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode12709], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller
00:47:57.762 request:
00:47:57.762 {
00:47:57.762 "method": "nvmf_create_subsystem",
00:47:57.762 "params": {
00:47:57.762 "nqn": "nqn.2016-06.io.spdk:cnode12709",
00:47:57.762 "model_number": "SPDK_Controller\u001f"
00:47:57.762 }
00:47:57.762 }
00:47:57.762 Got JSON-RPC error response
00:47:57.762 GoRPCClient: error on JSON-RPC call'
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/12/09 09:59:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode12709], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller
00:47:57.762 request:
00:47:57.762 {
00:47:57.762 "method": "nvmf_create_subsystem",
00:47:57.762 "params": {
00:47:57.762 "nqn": "nqn.2016-06.io.spdk:cnode12709",
00:47:57.762 "model_number": "SPDK_Controller\u001f"
00:47:57.762 }
00:47:57.762 }
00:47:57.762 Got JSON-RPC error response
00:47:57.762 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]]
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d'
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}'
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63'
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a'
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f'
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177'
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63'
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55'
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32'
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67'
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66'
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b'
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k
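The character-by-character trace above (and continuing below) is the gen_random_s helper from invalid.sh unrolled: it draws codes from the chars array (ASCII 32 through 127), prints each as hex with printf %x, materializes the character with echo -e, and appends it to string. Because RANDOM=0 was set earlier (target/invalid.sh@16), the sequence is deterministic across runs. The equivalent logic, condensed into a sketch of the same technique (not the harness function verbatim):

  gen_random_s() {
      local length=$1 ll i string=
      local chars=()
      # Candidate codes 32..127, matching the chars array in the trace.
      for ((i = 32; i <= 127; i++)); do chars+=("$i"); done
      for ((ll = 0; ll < length; ll++)); do
          # printf %x renders the code as hex; echo -e turns it into the
          # actual character, which is appended to the result string.
          string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
      done
      echo "$string"
  }
  RANDOM=0          # seed as the harness does, for reproducible output
  gen_random_s 21   # a 21-character serial number candidate

Strings built this way deliberately include characters such as backquotes, quotes, and DEL (0x7f) that a valid serial number or model number must reject, which is exactly what the RPC calls below check.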
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45'
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:57.762 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100
00:47:57.763 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64'
00:47:57.763 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d
00:47:57.763 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:57.763 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:57.763 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35
00:47:57.763 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23'
00:47:57.763 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#'
00:47:57.763 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:57.763 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:57.763 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53
00:47:57.763 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35'
00:47:57.763 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5
00:47:57.763 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:57.763 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:57.763 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99
00:47:57.763 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63'
00:47:57.763 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c
00:47:57.763 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:57.763 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:57.763 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100
00:47:57.763 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64'
00:47:57.763 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d
00:47:58.053 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:58.053 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:58.053 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96
00:47:58.053 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60'
00:47:58.053 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`'
00:47:58.053 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:58.053 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:58.053 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69
00:47:58.053 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45'
00:47:58.053 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E
00:47:58.053 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:58.053 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:58.053 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121
00:47:58.053 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79'
00:47:58.053 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y
00:47:58.053 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:58.053 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:58.053 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51
00:47:58.054 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33'
00:47:58.054 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3
00:47:58.054 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:58.054 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:58.054 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96
00:47:58.054 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60'
00:47:58.054 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`'
00:47:58.054 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:58.054 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:58.054 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ } == \- ]]
00:47:58.054 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '}cZcU2gfkEd#5cd`Ey3`'
00:47:58.054 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s '}cZcU2gfkEd#5cd`Ey3`' nqn.2016-06.io.spdk:cnode3091
00:47:58.054 [2024-12-09 09:59:05.064563] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3091: invalid serial number '}cZcU2gfkEd#5cd`Ey3`'
00:47:58.054 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/12/09 09:59:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode3091 serial_number:}cZcU2gfkEd#5cd`Ey3`], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN }cZcU2gfkEd#5cd`Ey3`
00:47:58.054 request:
00:47:58.054 {
00:47:58.054 "method": "nvmf_create_subsystem",
00:47:58.054 "params": {
00:47:58.054 "nqn": "nqn.2016-06.io.spdk:cnode3091",
00:47:58.054 "serial_number": "}cZ\u007fcU2gfkEd#5cd`Ey3`"
00:47:58.054 }
00:47:58.054 }
00:47:58.054 Got JSON-RPC error response
00:47:58.054 GoRPCClient: error on JSON-RPC call'
00:47:58.054 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/12/09 09:59:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode3091 serial_number:}cZcU2gfkEd#5cd`Ey3`], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN }cZcU2gfkEd#5cd`Ey3`
00:47:58.054 request:
00:47:58.054 {
00:47:58.054 "method": "nvmf_create_subsystem",
00:47:58.054 "params": {
00:47:58.054 "nqn": "nqn.2016-06.io.spdk:cnode3091",
00:47:58.054 "serial_number": "}cZ\u007fcU2gfkEd#5cd`Ey3`"
00:47:58.054 }
00:47:58.054 }
00:47:58.054 Got JSON-RPC error response
00:47:58.054 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]]
00:47:58.054 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
00:47:58.054 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
00:47:58.054 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:47:58.054 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:47:58.054 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:47:58.054 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:47:58.054 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:58.311 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124
00:47:58.311 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c'
00:47:58.311 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|'
00:47:58.311 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:58.311 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:58.311 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89
00:47:58.311 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59'
00:47:58.311 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y
00:47:58.311 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:58.311 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:58.311 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83
00:47:58.311 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53'
00:47:58.311 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S
00:47:58.311 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:58.311 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:58.311 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89
00:47:58.311 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59'
00:47:58.311 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y
00:47:58.311 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:58.311 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:58.311 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72
00:47:58.311 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48'
00:47:58.311 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H
00:47:58.311 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a'
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=:
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74'
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79'
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60'
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`'
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69'
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23'
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#'
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32'
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72'
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45'
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e'
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~'
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27'
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\'
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:47:58.312
09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:58.312 
09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
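The xtrace running through this stretch of the log is target/invalid.sh's gen_random_s helper building a random string one byte at a time: each pass prints one element of the chars array (ASCII codes 32 through 127) as hex with printf %x, expands it into a character with echo -e, and appends it to string. A minimal sketch of that loop, assuming the index is drawn with $RANDOM (the trace shows only the codes that were picked, not the selection expression):

    gen_random_s() {
        local length=$1 ll string=
        local chars=({32..127})   # printable ASCII plus DEL, as in the traced array
        for ((ll = 0; ll < length; ll++)); do
            local code=${chars[RANDOM % ${#chars[@]}]}     # assumed $RANDOM selection
            string+=$(echo -e "\\x$(printf %x "$code")")   # hex code -> character
        done
        echo "$string"
    }

Here gen_random_s 41 is producing the bogus model number; the shorter invalid serial number rejected above came from the same helper.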
00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:58.312 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:47:58.313 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:47:58.313 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:47:58.313 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:58.313 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:58.313 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:47:58.313 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:47:58.313 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:47:58.313 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:58.313 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:58.313 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:47:58.313 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:47:58.313 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:47:58.313 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:58.313 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:58.313 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:47:58.313 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:47:58.313 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:47:58.313 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:58.313 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:58.313 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:47:58.313 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x48' 00:47:58.313 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:47:58.313 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:58.313 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:58.313 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:47:58.313 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:47:58.313 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:47:58.313 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:58.313 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:58.569 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:47:58.569 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:47:58.569 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:47:58.569 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:58.569 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:58.569 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:47:58.569 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:47:58.569 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:47:58.569 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:58.569 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:58.569 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:47:58.569 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:47:58.569 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:47:58.569 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:58.569 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:58.569 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:47:58.569 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:47:58.569 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:47:58.569 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:47:58.569 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:47:58.569 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ | == \- ]] 00:47:58.569 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '|YSYH:ty`i#2rE~'\''F+pQjG4@Y^go.a@pgenHncC81' 00:47:58.569 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d '|YSYH:ty`i#2rE~'\''F+pQjG4@Y^go.a@pgenHncC81' nqn.2016-06.io.spdk:cnode17354 00:47:58.827 [2024-12-09 09:59:05.620012] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17354: 
invalid model number '|YSYH:ty`i#2rE~'F+pQjG4@Y^go.a@pgenHncC81' 00:47:58.827 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/12/09 09:59:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:|YSYH:ty`i#2rE~'\''F+pQjG4@Y^go.a@pgenHncC81 nqn:nqn.2016-06.io.spdk:cnode17354], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN |YSYH:ty`i#2rE~'\''F+pQjG4@Y^go.a@pgenHncC81 00:47:58.827 request: 00:47:58.827 { 00:47:58.827 "method": "nvmf_create_subsystem", 00:47:58.827 "params": { 00:47:58.827 "nqn": "nqn.2016-06.io.spdk:cnode17354", 00:47:58.827 "model_number": "|YSYH:ty`i#2rE~'\''F+pQjG4@Y^go.a@pgenHncC81" 00:47:58.827 } 00:47:58.827 } 00:47:58.827 Got JSON-RPC error response 00:47:58.827 GoRPCClient: error on JSON-RPC call' 00:47:58.827 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/12/09 09:59:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:|YSYH:ty`i#2rE~'F+pQjG4@Y^go.a@pgenHncC81 nqn:nqn.2016-06.io.spdk:cnode17354], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN |YSYH:ty`i#2rE~'F+pQjG4@Y^go.a@pgenHncC81 00:47:58.827 request: 00:47:58.827 { 00:47:58.827 "method": "nvmf_create_subsystem", 00:47:58.827 "params": { 00:47:58.827 "nqn": "nqn.2016-06.io.spdk:cnode17354", 00:47:58.827 "model_number": "|YSYH:ty`i#2rE~'F+pQjG4@Y^go.a@pgenHncC81" 00:47:58.827 } 00:47:58.827 } 00:47:58.827 Got JSON-RPC error response 00:47:58.827 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:47:58.827 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:47:59.085 [2024-12-09 09:59:05.935747] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:59.085 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:47:59.343 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:47:59.343 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:47:59.343 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:47:59.343 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:47:59.343 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:47:59.602 [2024-12-09 09:59:06.507505] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:47:59.602 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/12/09 09:59:06 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:47:59.602 request: 00:47:59.602 { 00:47:59.602 "method": "nvmf_subsystem_remove_listener", 00:47:59.602 "params": { 00:47:59.602 "nqn": "nqn.2016-06.io.spdk:cnode", 00:47:59.602 "listen_address": { 00:47:59.602 "trtype": "tcp", 00:47:59.602 "traddr": "", 00:47:59.602 "trsvcid": "4421" 00:47:59.602 } 00:47:59.602 } 00:47:59.602 } 
00:47:59.602 Got JSON-RPC error response 00:47:59.602 GoRPCClient: error on JSON-RPC call' 00:47:59.602 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/12/09 09:59:06 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:47:59.602 request: 00:47:59.602 { 00:47:59.602 "method": "nvmf_subsystem_remove_listener", 00:47:59.602 "params": { 00:47:59.602 "nqn": "nqn.2016-06.io.spdk:cnode", 00:47:59.602 "listen_address": { 00:47:59.602 "trtype": "tcp", 00:47:59.602 "traddr": "", 00:47:59.602 "trsvcid": "4421" 00:47:59.602 } 00:47:59.602 } 00:47:59.602 } 00:47:59.602 Got JSON-RPC error response 00:47:59.602 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:47:59.602 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21406 -i 0 00:47:59.861 [2024-12-09 09:59:06.783183] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21406: invalid cntlid range [0-65519] 00:47:59.861 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/12/09 09:59:06 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode21406], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:47:59.861 request: 00:47:59.861 { 00:47:59.861 "method": "nvmf_create_subsystem", 00:47:59.861 "params": { 00:47:59.861 "nqn": "nqn.2016-06.io.spdk:cnode21406", 00:47:59.861 "min_cntlid": 0 00:47:59.861 } 00:47:59.861 } 00:47:59.861 Got JSON-RPC error response 00:47:59.861 GoRPCClient: error on JSON-RPC call' 00:47:59.861 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/12/09 09:59:06 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode21406], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:47:59.861 request: 00:47:59.861 { 00:47:59.861 "method": "nvmf_create_subsystem", 00:47:59.861 "params": { 00:47:59.861 "nqn": "nqn.2016-06.io.spdk:cnode21406", 00:47:59.861 "min_cntlid": 0 00:47:59.861 } 00:47:59.861 } 00:47:59.861 Got JSON-RPC error response 00:47:59.861 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:47:59.861 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9077 -i 65520 00:48:00.120 [2024-12-09 09:59:07.038908] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9077: invalid cntlid range [65520-65519] 00:48:00.120 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/12/09 09:59:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode9077], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:48:00.120 request: 00:48:00.120 { 00:48:00.120 "method": "nvmf_create_subsystem", 00:48:00.120 "params": { 00:48:00.120 "nqn": "nqn.2016-06.io.spdk:cnode9077", 00:48:00.120 
"min_cntlid": 65520 00:48:00.120 } 00:48:00.120 } 00:48:00.120 Got JSON-RPC error response 00:48:00.120 GoRPCClient: error on JSON-RPC call' 00:48:00.120 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/12/09 09:59:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode9077], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:48:00.120 request: 00:48:00.120 { 00:48:00.120 "method": "nvmf_create_subsystem", 00:48:00.120 "params": { 00:48:00.120 "nqn": "nqn.2016-06.io.spdk:cnode9077", 00:48:00.120 "min_cntlid": 65520 00:48:00.120 } 00:48:00.120 } 00:48:00.120 Got JSON-RPC error response 00:48:00.120 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:48:00.120 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13466 -I 0 00:48:00.379 [2024-12-09 09:59:07.302691] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13466: invalid cntlid range [1-0] 00:48:00.379 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/12/09 09:59:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode13466], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:48:00.379 request: 00:48:00.379 { 00:48:00.379 "method": "nvmf_create_subsystem", 00:48:00.379 "params": { 00:48:00.380 "nqn": "nqn.2016-06.io.spdk:cnode13466", 00:48:00.380 "max_cntlid": 0 00:48:00.380 } 00:48:00.380 } 00:48:00.380 Got JSON-RPC error response 00:48:00.380 GoRPCClient: error on JSON-RPC call' 00:48:00.380 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/12/09 09:59:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode13466], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:48:00.380 request: 00:48:00.380 { 00:48:00.380 "method": "nvmf_create_subsystem", 00:48:00.380 "params": { 00:48:00.380 "nqn": "nqn.2016-06.io.spdk:cnode13466", 00:48:00.380 "max_cntlid": 0 00:48:00.380 } 00:48:00.380 } 00:48:00.380 Got JSON-RPC error response 00:48:00.380 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:48:00.380 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9115 -I 65520 00:48:00.639 [2024-12-09 09:59:07.570451] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9115: invalid cntlid range [1-65520] 00:48:00.639 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/12/09 09:59:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode9115], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:48:00.639 request: 00:48:00.639 { 00:48:00.639 "method": "nvmf_create_subsystem", 00:48:00.639 "params": { 00:48:00.639 "nqn": "nqn.2016-06.io.spdk:cnode9115", 00:48:00.639 "max_cntlid": 65520 00:48:00.639 } 00:48:00.639 } 00:48:00.639 Got JSON-RPC error response 00:48:00.639 GoRPCClient: error 
on JSON-RPC call' 00:48:00.639 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/12/09 09:59:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode9115], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:48:00.639 request: 00:48:00.639 { 00:48:00.639 "method": "nvmf_create_subsystem", 00:48:00.639 "params": { 00:48:00.639 "nqn": "nqn.2016-06.io.spdk:cnode9115", 00:48:00.639 "max_cntlid": 65520 00:48:00.639 } 00:48:00.639 } 00:48:00.639 Got JSON-RPC error response 00:48:00.639 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:48:00.639 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31965 -i 6 -I 5 00:48:00.898 [2024-12-09 09:59:07.854174] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31965: invalid cntlid range [6-5] 00:48:00.898 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/12/09 09:59:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode31965], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:48:00.898 request: 00:48:00.898 { 00:48:00.898 "method": "nvmf_create_subsystem", 00:48:00.898 "params": { 00:48:00.898 "nqn": "nqn.2016-06.io.spdk:cnode31965", 00:48:00.898 "min_cntlid": 6, 00:48:00.898 "max_cntlid": 5 00:48:00.898 } 00:48:00.898 } 00:48:00.898 Got JSON-RPC error response 00:48:00.898 GoRPCClient: error on JSON-RPC call' 00:48:00.898 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/12/09 09:59:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode31965], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:48:00.898 request: 00:48:00.898 { 00:48:00.898 "method": "nvmf_create_subsystem", 00:48:00.898 "params": { 00:48:00.898 "nqn": "nqn.2016-06.io.spdk:cnode31965", 00:48:00.898 "min_cntlid": 6, 00:48:00.898 "max_cntlid": 5 00:48:00.898 } 00:48:00.898 } 00:48:00.898 Got JSON-RPC error response 00:48:00.898 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:48:00.898 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:48:01.155 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:48:01.155 { 00:48:01.155 "name": "foobar", 00:48:01.155 "method": "nvmf_delete_target", 00:48:01.155 "req_id": 1 00:48:01.155 } 00:48:01.155 Got JSON-RPC error response 00:48:01.155 response: 00:48:01.155 { 00:48:01.155 "code": -32602, 00:48:01.155 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:48:01.155 }' 00:48:01.155 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:48:01.155 { 00:48:01.155 "name": "foobar", 00:48:01.155 "method": "nvmf_delete_target", 00:48:01.155 "req_id": 1 00:48:01.155 } 00:48:01.155 Got JSON-RPC error response 00:48:01.156 response: 00:48:01.156 { 00:48:01.156 "code": -32602, 00:48:01.156 "message": "The specified target doesn't exist, cannot delete it." 00:48:01.156 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:48:01.156 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:48:01.156 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:48:01.156 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:48:01.156 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:48:01.156 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:48:01.156 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:48:01.156 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:48:01.156 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:48:01.156 rmmod nvme_tcp 00:48:01.156 rmmod nvme_fabrics 00:48:01.156 rmmod nvme_keyring 00:48:01.156 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:48:01.156 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:48:01.156 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:48:01.156 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 75032 ']' 00:48:01.156 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 75032 00:48:01.156 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 75032 ']' 00:48:01.156 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 75032 00:48:01.156 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:48:01.156 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:01.156 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75032 00:48:01.156 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:48:01.156 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:48:01.156 killing process with pid 75032 00:48:01.156 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75032' 00:48:01.156 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 75032 00:48:01.156 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 75032 00:48:01.414 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:48:01.414 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:48:01.414 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:48:01.414 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:48:01.414 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:48:01.414 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:48:01.414 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:48:01.414 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:48:01.414 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:48:01.414 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:48:01.414 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:48:01.414 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:48:01.414 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:48:01.414 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:48:01.414 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:48:01.414 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:48:01.414 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:48:01.414 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:48:01.673 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:48:01.673 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:48:01.673 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:48:01.673 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:48:01.673 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:48:01.673 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:01.673 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:48:01.673 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:01.673 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@300 -- # return 0 00:48:01.673 00:48:01.673 real 0m6.622s 00:48:01.673 user 0m24.962s 00:48:01.673 sys 0m1.626s 00:48:01.673 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:48:01.673 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:48:01.673 ************************************ 00:48:01.673 END TEST nvmf_invalid 00:48:01.673 ************************************ 00:48:01.673 09:59:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 
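Every negative case in the nvmf_invalid run that ends above follows the same shape: issue one rpc.py call with a single out-of-range parameter, capture the GoRPCClient error text, and pattern-match the Code=-32602 message. A condensed sketch of one cntlid case from this log (the 2>&1 capture and the explicit exit are assumptions; the trace only shows the expanded out= blobs and the [[ ]] tests):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # min_cntlid=0 is below the valid floor of 1, so the target must reject it
    out=$("$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21406 -i 0 2>&1) || true
    [[ $out == *"Invalid cntlid range"* ]] || exit 1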
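The iptr cleanup traced just above works because every firewall rule the harness installs is tagged: ipts appends an SPDK_NVMF comment when inserting, so teardown can filter those lines out of iptables-save output and restore the rest untouched. Sketched from the expansions visible in this log:

    ipts() {  # install a rule tagged with its own argument list
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    iptr() {  # drop only the tagged rules, keep everything else
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }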
00:48:01.673 09:59:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:48:01.673 09:59:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:48:01.673 09:59:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:48:01.673 ************************************ 00:48:01.673 START TEST nvmf_connect_stress 00:48:01.673 ************************************ 00:48:01.673 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:48:01.932 * Looking for test storage... 00:48:01.932 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:48:01.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:01.932 --rc genhtml_branch_coverage=1 00:48:01.932 --rc genhtml_function_coverage=1 00:48:01.932 --rc genhtml_legend=1 00:48:01.932 --rc geninfo_all_blocks=1 00:48:01.932 --rc geninfo_unexecuted_blocks=1 00:48:01.932 00:48:01.932 ' 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:48:01.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:01.932 --rc genhtml_branch_coverage=1 00:48:01.932 --rc genhtml_function_coverage=1 00:48:01.932 --rc genhtml_legend=1 00:48:01.932 --rc geninfo_all_blocks=1 00:48:01.932 --rc geninfo_unexecuted_blocks=1 00:48:01.932 00:48:01.932 ' 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:48:01.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:01.932 --rc genhtml_branch_coverage=1 00:48:01.932 --rc genhtml_function_coverage=1 00:48:01.932 --rc genhtml_legend=1 00:48:01.932 --rc geninfo_all_blocks=1 00:48:01.932 --rc geninfo_unexecuted_blocks=1 00:48:01.932 00:48:01.932 ' 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:48:01.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:01.932 --rc genhtml_branch_coverage=1 00:48:01.932 --rc genhtml_function_coverage=1 00:48:01.932 --rc genhtml_legend=1 00:48:01.932 --rc geninfo_all_blocks=1 00:48:01.932 --rc geninfo_unexecuted_blocks=1 00:48:01.932 00:48:01.932 ' 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
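The lt/cmp_versions trace above decides which lcov flags to use: scripts/common.sh splits both version strings on '.', '-' and ':' into ver1/ver2 and compares them field by field, and since 1.15 < 2 it selects the pre-2.0 --rc lcov_branch_coverage spelling assigned above. A minimal sketch, assuming purely numeric fields and treating a missing field as 0:

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-: op=$2 i
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *'='* ]]   # equal versions satisfy only <=, >= or ==
    }

    lt 1.15 2 && echo 'lcov predates 2.0'   # matches the traced result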
00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:48:01.932 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:48:01.933 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:48:01.933 09:59:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:48:01.933 09:59:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:48:01.933 Cannot find device "nvmf_init_br" 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:48:01.933 Cannot find device "nvmf_init_br2" 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:48:01.933 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:48:02.191 Cannot find device "nvmf_tgt_br" 00:48:02.191 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # true 00:48:02.191 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:48:02.191 Cannot find device "nvmf_tgt_br2" 00:48:02.191 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # true 00:48:02.191 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:48:02.191 Cannot find device "nvmf_init_br" 00:48:02.191 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # true 00:48:02.191 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:48:02.191 Cannot find device "nvmf_init_br2" 00:48:02.191 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # true 00:48:02.191 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:48:02.191 Cannot find device "nvmf_tgt_br" 00:48:02.191 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # true 00:48:02.191 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:48:02.191 Cannot find device "nvmf_tgt_br2" 00:48:02.191 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # true 00:48:02.191 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:48:02.191 Cannot find device "nvmf_br" 00:48:02.191 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # true 00:48:02.191 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:48:02.191 Cannot find device "nvmf_init_if" 00:48:02.191 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # true 00:48:02.191 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:48:02.191 Cannot find device "nvmf_init_if2" 00:48:02.191 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # true 00:48:02.191 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:48:02.191 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:48:02.191 09:59:09 
00:48:02.191 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # true
00:48:02.191 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:48:02.191 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:48:02.191 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # true
00:48:02.191 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:48:02.191 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:48:02.191 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:48:02.191 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:48:02.191 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:48:02.191 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:48:02.191 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:48:02.191 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:48:02.191 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:48:02.191 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:48:02.191 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:48:02.191 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:48:02.191 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:48:02.191 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:48:02.191 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:48:02.191 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:48:02.191 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:48:02.191 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:48:02.449 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:48:02.449 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:48:02.449 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:48:02.449 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up
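Condensed, nvmf_veth_init builds two veth pairs per side, moves the target ends into a private network namespace, and prepares a bridge to tie the peer ends together. A sketch using the names and addresses from the trace, showing one pair per side and omitting error handling:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end plus its future bridge port
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end plus its future bridge port
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # only the target end moves into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                             # the *_br peers are enslaved to this next
    ip link set nvmf_br up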
00:48:02.449 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:48:02.449 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:48:02.449 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:48:02.449 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:48:02.449 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:48:02.449 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:48:02.449 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:48:02.449 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:48:02.449 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:48:02.449 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:48:02.449 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:48:02.449 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:48:02.449 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms
00:48:02.449 
00:48:02.449 --- 10.0.0.3 ping statistics ---
00:48:02.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:48:02.449 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms
00:48:02.449 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:48:02.449 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:48:02.449 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.098 ms
00:48:02.449 
00:48:02.449 --- 10.0.0.4 ping statistics ---
00:48:02.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:48:02.449 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms
00:48:02.449 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:48:02.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:48:02.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms
00:48:02.449 
00:48:02.449 --- 10.0.0.1 ping statistics ---
00:48:02.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:48:02.449 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms
00:48:02.449 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:48:02.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:48:02.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms
00:48:02.449 
00:48:02.449 --- 10.0.0.2 ping statistics ---
00:48:02.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:48:02.450 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms
00:48:02.450 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:48:02.450 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@461 -- # return 0
00:48:02.450 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:48:02.450 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:48:02.450 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:48:02.450 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:48:02.450 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:48:02.450 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:48:02.450 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:48:02.450 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:48:02.450 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:48:02.450 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:48:02.450 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:02.450 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=75594
00:48:02.450 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 75594
00:48:02.450 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 75594 ']'
00:48:02.450 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:48:02.450 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:48:02.450 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:48:02.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:48:02.450 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:48:02.450 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:48:02.450 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:02.450 [2024-12-09 09:59:09.437553] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization...
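The four pings above verify reachability in both directions across the bridge before the target is started, and the NVMF_APP line is what puts nvmf_tgt inside the namespace: the ip netns exec wrapper is simply prepended to the app's argv. A sketch with the values traced here (backgrounding and the waitforlisten polling elided):

    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE)
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")  # prepend the namespace wrapper
    "${NVMF_APP[@]}" &   # expands to: ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xE
    nvmfpid=$!           # 75594 in this run; waitforlisten then polls /var/tmp/spdk.sock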
00:48:02.450 [2024-12-09 09:59:09.437637] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:48:02.715 [2024-12-09 09:59:09.572738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:48:02.715 [2024-12-09 09:59:09.643981] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:48:02.715 [2024-12-09 09:59:09.644073] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:48:02.715 [2024-12-09 09:59:09.644087] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:48:02.715 [2024-12-09 09:59:09.644097] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:48:02.715 [2024-12-09 09:59:09.644104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:48:02.715 [2024-12-09 09:59:09.645287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:48:02.715 [2024-12-09 09:59:09.645340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:48:02.715 [2024-12-09 09:59:09.645345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:03.650 [2024-12-09 09:59:10.503169] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
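rpc_cmd is a thin wrapper that ultimately drives SPDK's JSON-RPC interface over /var/tmp/spdk.sock. A hand-run equivalent of this provisioning sequence via scripts/rpc.py might look as follows; the rpc.py invocation is my assumption, while the subcommands and flags are exactly those traced here, plus the bdev_null_create that follows on the next lines:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, options as traced
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $RPC bdev_null_create NULL1 1000 512                         # null bdev NULL1, as traced below

The Unix-domain socket is a filesystem object, so the RPC client does not need to enter the target's network namespace to reach it.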
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:03.650 [2024-12-09 09:59:10.520706] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:03.650 NULL1
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=75647
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:48:03.650 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:48:03.651 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:48:03.651 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:48:03.651 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:48:03.651 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:48:03.651 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:48:03.651 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:48:03.651 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:48:03.651 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:48:03.651 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:48:03.651 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:48:03.651 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:48:03.651 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:48:03.651 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:48:03.651 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:48:03.651 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:48:03.651 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:48:03.651 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:48:03.651 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:48:03.651 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:48:03.651 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:48:03.651 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:48:03.651 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:48:03.651 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:48:03.651 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:48:03.651 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:48:03.651 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:48:03.651 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:48:03.651 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:48:03.651 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75647
00:48:03.651 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:48:03.651 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:03.651 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
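From here the same five-line block repeats for as long as the perf process lives: kill -0 sends no signal at all, it only tests that the PID still exists, so each pass replays the queued RPCs against the target while connect_stress hammers the subsystem. The loop's shape, reduced to a sketch (names from the trace):

    rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt
    while kill -0 "$PERF_PID"; do   # signal 0 delivers nothing; it only asks "is the PID alive?"
        rpc_cmd < "$rpcs"           # replay the 20 queued RPC lines against the target
    done                            # the failing probe prints "kill: (...) - No such process" and ends the loop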
00:48:03.908 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:03.908 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75647
00:48:03.908 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:48:03.908 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:03.908 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:04.474 09:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:04.474 09:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75647
00:48:04.474 09:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:48:04.474 09:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:04.474 09:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:04.732 09:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:04.732 09:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75647
00:48:04.732 09:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:48:04.732 09:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:04.732 09:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:04.989 09:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:04.989 09:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75647
00:48:04.989 09:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:48:04.989 09:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:04.989 09:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:05.247 09:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:05.247 09:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75647
00:48:05.247 09:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:48:05.247 09:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:05.247 09:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:05.811 09:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:05.811 09:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75647
00:48:05.811 09:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:48:05.811 09:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:05.811 09:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:06.070 09:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:06.070 09:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75647
00:48:06.070 09:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:48:06.070 09:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:06.070 09:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:06.329 09:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:06.329 09:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75647
00:48:06.329 09:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:48:06.329 09:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:06.329 09:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:06.587 09:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:06.587 09:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75647
00:48:06.587 09:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:48:06.587 09:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:06.587 09:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:07.151 09:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:07.151 09:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75647
00:48:07.152 09:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:48:07.152 09:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:07.152 09:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:07.409 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:07.409 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75647
00:48:07.409 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:48:07.409 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:07.409 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:07.668 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:07.668 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75647
00:48:07.668 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:48:07.668 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:07.668 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:07.927 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:07.927 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75647
00:48:07.927 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:48:07.927 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:07.927 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:08.187 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:08.187 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75647
00:48:08.187 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:48:08.187 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:08.187 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:08.755 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:08.755 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75647
00:48:08.755 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:48:08.755 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:08.755 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:09.014 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:09.014 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75647
00:48:09.014 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:48:09.014 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:09.014 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:09.273 09:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:09.273 09:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75647
00:48:09.273 09:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:48:09.273 09:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:09.273 09:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:09.532 09:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:09.532 09:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75647
00:48:09.532 09:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:48:09.532 09:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:09.532 09:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:09.790 09:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:09.790 09:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75647
00:48:09.790 09:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:48:09.790 09:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:09.790 09:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:10.355 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:10.355 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75647
00:48:10.355 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:48:10.355 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:10.355 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:10.613 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:10.613 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75647
00:48:10.613 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:48:10.613 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:10.613 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:10.872 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:10.872 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75647
00:48:10.872 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:48:10.872 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:10.872 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:11.132 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:11.132 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75647
00:48:11.132 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:48:11.132 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:11.132 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:11.701 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:11.701 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75647
00:48:11.701 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:48:11.701 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:11.701 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:11.959 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:11.959 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75647
00:48:11.959 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:48:11.959 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:11.959 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:12.217 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:12.217 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75647
00:48:12.217 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:48:12.217 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:12.217 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:12.476 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:12.476 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75647
00:48:12.476 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:48:12.476 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:12.477 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:12.734 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:12.734 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75647
00:48:12.734 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:48:12.734 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:12.734 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:13.305 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:13.305 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75647
00:48:13.305 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:48:13.305 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:13.305 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:13.563 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:13.563 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75647
00:48:13.563 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:48:13.563 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:13.563 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:13.823 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:13.823 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75647
00:48:13.823 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:48:13.823 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:48:13.823 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:13.823 Testing NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:48:14.082 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:48:14.082 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75647
00:48:14.082 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (75647) - No such process
00:48:14.082 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 75647
00:48:14.082 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt
00:48:14.082 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:48:14.082 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini
00:48:14.082 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:48:14.082 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync
00:48:14.082 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:48:14.082 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e
00:48:14.082 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:48:14.082 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:48:14.082 rmmod nvme_tcp
00:48:14.342 rmmod nvme_fabrics
00:48:14.342 rmmod nvme_keyring
00:48:14.342 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:48:14.342 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e
00:48:14.342 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0
00:48:14.342 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 75594 ']'
00:48:14.342 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 75594
00:48:14.342 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 75594 ']'
00:48:14.342 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 75594
00:48:14.342 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname
00:48:14.342 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:48:14.342 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75594
00:48:14.342 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
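Teardown reuses the same kill -0 probe before killing the target itself. The killprocess steps traced here amount to roughly the following, a sketch of the shape rather than the exact helper source:

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid"                                      # bail out if it already exited
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")  # reactor_1 here; sudo is special-cased
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                          # reap it and propagate its exit status
    }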
00:48:14.342 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:48:14.342 killing process with pid 75594
00:48:14.342 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75594'
00:48:14.342 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 75594
00:48:14.342 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 75594
00:48:14.602 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:48:14.602 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:48:14.602 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:48:14.602 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr
00:48:14.602 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save
00:48:14.602 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:48:14.602 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore
00:48:14.602 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:48:14.602 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:48:14.602 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:48:14.602 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:48:14.602 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:48:14.602 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:48:14.602 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:48:14.602 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:48:14.602 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:48:14.602 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:48:14.602 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:48:14.602 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:48:14.602 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:48:14.602 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:48:14.602 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:48:14.602 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@246 -- # remove_spdk_ns
00:48:14.602 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
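This is where the SPDK_NVMF comment tags attached to the iptables rules during setup pay off: iptr restores a dump with the tagged rules filtered out, removing exactly what this test added and nothing else.

    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged rules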
00:48:14.602 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:48:14.602 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:48:14.862 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@300 -- # return 0
00:48:14.862 
00:48:14.862 real 0m13.007s
00:48:14.862 user 0m42.282s
00:48:14.862 sys 0m3.312s
00:48:14.862 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:48:14.862 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:48:14.862 ************************************
00:48:14.862 END TEST nvmf_connect_stress
00:48:14.862 ************************************
00:48:14.862 09:59:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:48:14.862 09:59:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:48:14.863 09:59:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:48:14.863 09:59:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:48:14.863 ************************************
00:48:14.863 START TEST nvmf_fused_ordering
00:48:14.863 ************************************
00:48:14.863 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:48:14.863 * Looking for test storage...
00:48:14.863 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:48:14.863 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:48:14.863 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version
00:48:14.863 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:48:15.123 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:48:15.123 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:48:15.123 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l
00:48:15.123 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l
00:48:15.123 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-:
00:48:15.123 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1
00:48:15.123 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-:
00:48:15.123 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2
00:48:15.123 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<'
00:48:15.123 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2
00:48:15.123 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1
00:48:15.123 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:48:15.123 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in
00:48:15.123 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1
00:48:15.123 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 ))
00:48:15.123 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:48:15.123 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1
00:48:15.123 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1
00:48:15.123 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:48:15.123 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1
00:48:15.123 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1
00:48:15.123 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2
00:48:15.123 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2
00:48:15.123 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:48:15.123 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2
00:48:15.123 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2
00:48:15.123 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:48:15.123 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:48:15.123 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0
00:48:15.123 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:48:15.123 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:48:15.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:48:15.123 --rc genhtml_branch_coverage=1
00:48:15.123 --rc genhtml_function_coverage=1
00:48:15.123 --rc genhtml_legend=1
00:48:15.123 --rc geninfo_all_blocks=1
00:48:15.123 --rc geninfo_unexecuted_blocks=1
00:48:15.123 
00:48:15.123 '
00:48:15.123 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:48:15.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:48:15.123 --rc genhtml_branch_coverage=1
00:48:15.123 --rc genhtml_function_coverage=1
00:48:15.123 --rc genhtml_legend=1
00:48:15.123 --rc geninfo_all_blocks=1
00:48:15.123 --rc geninfo_unexecuted_blocks=1
00:48:15.123 
00:48:15.123 '
00:48:15.123 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:48:15.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:48:15.124 --rc genhtml_branch_coverage=1
00:48:15.124 --rc genhtml_function_coverage=1
00:48:15.124 --rc genhtml_legend=1
00:48:15.124 --rc geninfo_all_blocks=1
00:48:15.124 --rc geninfo_unexecuted_blocks=1
00:48:15.124 
00:48:15.124 '
00:48:15.124 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:48:15.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:48:15.124 --rc genhtml_branch_coverage=1
00:48:15.124 --rc genhtml_function_coverage=1
00:48:15.124 --rc genhtml_legend=1
00:48:15.124 --rc geninfo_all_blocks=1
00:48:15.124 --rc geninfo_unexecuted_blocks=1
00:48:15.124 
00:48:15.124 '
00:48:15.124 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:48:15.124 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s
00:48:15.124 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:48:15.124 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:48:15.124 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:48:15.124 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:48:15.124 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:48:15.124 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:48:15.124 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:48:15.124 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:48:15.124 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:48:15.124 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:48:15.124 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541
00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541
00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob
00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
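The lt 1.15 2 check traced a little earlier splits both version strings on IFS=.-: and compares numeric components left to right; lcov 1.15 sorts below 2, so the coverage options above get exported. Condensed into a sketch (the real cmp_versions also tracks segment counts and the gt/eq cases; the zero-defaulting of missing components here is my simplification):

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v a b
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a == b )) && continue
            (( a < b )) && { [[ $op == *'<'* ]]; return; }   # first differing component decides
            [[ $op == *'>'* ]]; return
        done
        [[ $op == *'='* ]]   # all components equal: true only for ==, <=, >=
    }
    lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 -> 1 < 2 -> true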
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:48:15.124 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@460 -- # nvmf_veth_init 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:48:15.124 09:59:22 
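
The `[: : integer expression expected` diagnostic above is a classic bash pitfall: common.sh line 33 runs an arithmetic test whose left operand expands to the empty string, and `-eq` requires an integer on both sides. A minimal reproduction plus two defensive rewrites (the variable name is illustrative):

    flag=""                     # empty, as in the log

    [ "$flag" -eq 1 ]           # -> "[: : integer expression expected", exit status 2

    [ "${flag:-0}" -eq 1 ]      # default the expansion, so the test always sees an integer
    [ "$flag" = "1" ]           # or compare as strings, which tolerates empty values

Because the failed test simply evaluates as false, the script falls through to the next branch rather than aborting, which is why the trace continues below.
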
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:48:15.124 Cannot find device "nvmf_init_br" 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:48:15.124 Cannot find device "nvmf_init_br2" 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:48:15.124 Cannot find device "nvmf_tgt_br" 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # true 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:48:15.124 Cannot find device "nvmf_tgt_br2" 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # true 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:48:15.124 Cannot find device "nvmf_init_br" 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # true 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:48:15.124 Cannot find device "nvmf_init_br2" 00:48:15.124 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # true 00:48:15.125 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:48:15.125 Cannot find device "nvmf_tgt_br" 00:48:15.125 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # true 00:48:15.125 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:48:15.125 Cannot find device "nvmf_tgt_br2" 00:48:15.125 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # true 00:48:15.125 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:48:15.384 Cannot find device "nvmf_br" 00:48:15.384 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # true 00:48:15.384 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:48:15.384 Cannot find device "nvmf_init_if" 00:48:15.384 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # true 00:48:15.384 
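
Each "Cannot find device" message above is immediately followed by a bare `# true` at the same script line, consistent with an `|| true` fallback: nvmf_veth_init begins by best-effort-deleting any topology left over from a previous run, and must not abort on a host that is already clean (the pre-clean finishes just below). The idiom, sketched:

    # best-effort pre-clean: tolerate devices that do not exist yet
    ip link set nvmf_init_br nomaster                          || true
    ip link delete nvmf_br type bridge                         || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true
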
09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:48:15.384 Cannot find device "nvmf_init_if2" 00:48:15.384 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # true 00:48:15.384 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:48:15.384 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:48:15.384 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # true 00:48:15.384 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:48:15.385 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:48:15.385 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # true 00:48:15.385 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:48:15.385 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:48:15.385 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:48:15.385 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:48:15.385 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:48:15.385 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:48:15.385 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:48:15.385 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:48:15.385 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:48:15.385 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:48:15.385 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:48:15.385 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:48:15.385 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:48:15.385 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:48:15.385 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:48:15.385 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:48:15.385 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:48:15.385 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:48:15.385 09:59:22 
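
Lines @177 through @204 above build the test topology: a network namespace for the target, veth pairs whose `*_if` ends carry the addresses while the `*_br` ends wait to be enslaved to a bridge, and the target-side `*_if` ends moved into the namespace. A condensed, self-contained sketch of the same pattern, reduced to one initiator-side and one target-side link (the real test creates two of each; the bridge wiring appears in the trace below):

    #!/usr/bin/env bash
    set -e
    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: the *_if end gets the address, the *_br end joins the bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br

    # only the target's *_if end moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    # bring links up on both sides of the namespace boundary
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # a host-side bridge stitches the two segments together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    ping -c 1 10.0.0.3    # host reaches the namespaced target through the bridge
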
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:48:15.385 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:48:15.385 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:48:15.385 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:48:15.385 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:48:15.385 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:48:15.385 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:48:15.385 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:48:15.385 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:48:15.385 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:48:15.645 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:48:15.645 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:48:15.645 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:48:15.645 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:48:15.645 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:48:15.645 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:48:15.645 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.147 ms 00:48:15.645 00:48:15.645 --- 10.0.0.3 ping statistics --- 00:48:15.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:15.645 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:48:15.645 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:48:15.645 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:48:15.645 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.123 ms 00:48:15.645 00:48:15.645 --- 10.0.0.4 ping statistics --- 00:48:15.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:15.645 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:48:15.645 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:48:15.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:48:15.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:48:15.645 00:48:15.645 --- 10.0.0.1 ping statistics --- 00:48:15.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:15.645 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:48:15.645 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:48:15.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:48:15.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:48:15.645 00:48:15.645 --- 10.0.0.2 ping statistics --- 00:48:15.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:15.645 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:48:15.645 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:48:15.645 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@461 -- # return 0 00:48:15.645 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:48:15.645 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:48:15.645 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:48:15.645 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:48:15.645 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:48:15.645 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:48:15.645 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:48:15.645 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:48:15.645 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:48:15.645 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:48:15.645 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:48:15.645 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=76037 00:48:15.645 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:48:15.645 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 76037 00:48:15.645 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 76037 ']' 00:48:15.645 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:15.645 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:15.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:15.645 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
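
nvmfappstart, traced above, launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the RPC socket answers. A rough equivalent of that launch-and-poll, assuming the default /var/tmp/spdk.sock and scripts/rpc.py (waitforlisten's actual probe lives in autotest_common.sh and differs in detail):

    # start the target in the namespace; $! becomes nvmfpid (76037 above)
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!

    # poll until the app listens on the RPC socket (max_retries=100 above)
    for _ in $(seq 1 100); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods &>/dev/null && break
        sleep 0.1
    done
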
00:48:15.645 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:15.645 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:48:15.645 [2024-12-09 09:59:22.576595] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:48:15.645 [2024-12-09 09:59:22.576678] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:48:15.904 [2024-12-09 09:59:22.710780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:15.904 [2024-12-09 09:59:22.778300] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:48:15.904 [2024-12-09 09:59:22.778378] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:48:15.904 [2024-12-09 09:59:22.778390] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:48:15.904 [2024-12-09 09:59:22.778400] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:48:15.904 [2024-12-09 09:59:22.778409] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:48:15.904 [2024-12-09 09:59:22.778810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:16.840 09:59:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:16.840 09:59:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:48:16.840 09:59:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:48:16.840 09:59:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:48:16.840 09:59:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:48:16.840 09:59:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:48:16.840 09:59:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:48:16.840 09:59:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:16.840 09:59:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:48:16.840 [2024-12-09 09:59:23.627560] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:16.840 09:59:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:16.840 09:59:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:48:16.840 09:59:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:16.840 09:59:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:48:16.840 09:59:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:16.840 09:59:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:48:16.840 09:59:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:16.840 09:59:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:48:16.840 [2024-12-09 09:59:23.651643] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:48:16.840 09:59:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:16.840 09:59:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:48:16.840 09:59:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:16.840 09:59:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:48:16.840 NULL1 00:48:16.840 09:59:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:16.840 09:59:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:48:16.840 09:59:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:16.840 09:59:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:48:16.840 09:59:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:16.840 09:59:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:48:16.840 09:59:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:16.840 09:59:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:48:16.840 09:59:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:16.840 09:59:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:48:16.841 [2024-12-09 09:59:23.727139] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
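
Steps @15 through @20 of fused_ordering.sh, traced above, configure the target entirely over RPC: create the TCP transport, add subsystem cnode1, listen on the namespaced 10.0.0.3:4420, create a 1000 MiB null bdev with 512-byte blocks, and attach it as a namespace. Assuming rpc_cmd is the usual thin wrapper over scripts/rpc.py, the equivalent direct calls would be roughly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc bdev_null_create NULL1 1000 512    # backing namespace: 1000 MiB, 512 B blocks
    $rpc bdev_wait_for_examine
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering initiator binary then connects to that listener, as its EAL banner below shows.
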
00:48:16.841 [2024-12-09 09:59:23.727198] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76087 ] 00:48:17.410 Attached to nqn.2016-06.io.spdk:cnode1 00:48:17.410 Namespace ID: 1 size: 1GB 00:48:17.410 fused_ordering(0) [... 1,024 consecutive fused_ordering(0) through fused_ordering(1023) entries, logged between 00:48:17.410 and 00:48:18.814, elided ...] 00:48:18.814 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:48:18.814 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:48:18.814 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:48:18.814 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:48:18.814 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:48:18.814 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:48:18.814 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:48:18.814 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:48:18.814 rmmod nvme_tcp 00:48:18.814 rmmod nvme_fabrics 00:48:18.814 rmmod nvme_keyring 00:48:18.814 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:48:18.814 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:48:18.814 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:48:18.814 09:59:25
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 76037 ']' 00:48:18.814 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 76037 00:48:18.814 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 76037 ']' 00:48:18.814 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 76037 00:48:18.814 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:48:18.814 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:18.814 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76037 00:48:18.814 killing process with pid 76037 00:48:18.814 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:48:18.814 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:48:18.814 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76037' 00:48:18.814 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 76037 00:48:18.814 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 76037 00:48:19.073 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:48:19.073 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:48:19.073 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:48:19.073 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:48:19.073 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:48:19.073 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:48:19.073 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:48:19.073 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:48:19.073 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:48:19.073 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:48:19.073 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:48:19.073 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:48:19.073 09:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:48:19.073 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:48:19.073 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:48:19.073 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:48:19.073 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # 
ip link set nvmf_tgt_br2 down 00:48:19.073 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:48:19.073 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:48:19.073 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:48:19.073 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:48:19.332 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:48:19.332 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@246 -- # remove_spdk_ns 00:48:19.332 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:19.332 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:48:19.332 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:19.332 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@300 -- # return 0 00:48:19.332 00:48:19.332 real 0m4.473s 00:48:19.332 user 0m4.689s 00:48:19.332 sys 0m1.717s 00:48:19.332 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:48:19.332 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:48:19.332 ************************************ 00:48:19.332 END TEST nvmf_fused_ordering 00:48:19.332 ************************************ 00:48:19.333 09:59:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:48:19.333 09:59:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:48:19.333 09:59:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:48:19.333 09:59:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:48:19.333 ************************************ 00:48:19.333 START TEST nvmf_ns_masking 00:48:19.333 ************************************ 00:48:19.333 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:48:19.592 * Looking for test storage... 
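The network teardown traced above (nvmftestfini -> nvmf_tcp_fini -> nvmf_veth_fini) reduces to three steps: restore iptables minus the SPDK-tagged rules, detach and delete the veth/bridge topology, then drop the target's network namespace. A minimal Bash sketch of that sequence, assuming the harness's fixed interface and namespace names (nvmf_init_br*, nvmf_tgt_br*, nvmf_br, nvmf_tgt_ns_spdk) and using `ip netns delete` as a stand-in for the harness's _remove_spdk_ns helper:

    # Drop only the SPDK-tagged iptables rules (the "iptr" step in the trace).
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Detach bridge ports and bring them down before deleting anything.
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" nomaster || true
        ip link set "$port" down || true
    done

    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip link delete nvmf_init_if2 || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
    ip netns delete nvmf_tgt_ns_spdk || true   # assumed equivalent of remove_spdk_ns

Deleting the veth peers inside the namespace before removing the namespace itself mirrors the order in the trace; the `|| true` guards are an addition here so the sketch is idempotent on a partially torn-down host.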
00:48:19.592 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:48:19.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:19.592 --rc genhtml_branch_coverage=1 00:48:19.592 --rc genhtml_function_coverage=1 00:48:19.592 --rc genhtml_legend=1 00:48:19.592 --rc geninfo_all_blocks=1 00:48:19.592 --rc geninfo_unexecuted_blocks=1 00:48:19.592 00:48:19.592 ' 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:48:19.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:19.592 --rc genhtml_branch_coverage=1 00:48:19.592 --rc genhtml_function_coverage=1 00:48:19.592 --rc genhtml_legend=1 00:48:19.592 --rc geninfo_all_blocks=1 00:48:19.592 --rc geninfo_unexecuted_blocks=1 00:48:19.592 00:48:19.592 ' 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:48:19.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:19.592 --rc genhtml_branch_coverage=1 00:48:19.592 --rc genhtml_function_coverage=1 00:48:19.592 --rc genhtml_legend=1 00:48:19.592 --rc geninfo_all_blocks=1 00:48:19.592 --rc geninfo_unexecuted_blocks=1 00:48:19.592 00:48:19.592 ' 00:48:19.592 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:48:19.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:19.592 --rc genhtml_branch_coverage=1 00:48:19.592 --rc genhtml_function_coverage=1 00:48:19.593 --rc genhtml_legend=1 00:48:19.593 --rc geninfo_all_blocks=1 00:48:19.593 --rc geninfo_unexecuted_blocks=1 00:48:19.593 00:48:19.593 ' 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- 
# uname -s 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:48:19.593 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # 
hostsock=/var/tmp/host.sock 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=153e0603-2542-4417-aa5d-b2707fcfd725 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=a345ea9c-4aa6-4d29-9187-a249110a96b7 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=1037529d-5b56-41aa-bc42-867388f457f7 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@460 -- # nvmf_veth_init 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:48:19.593 09:59:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:48:19.593 Cannot find device "nvmf_init_br" 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:48:19.593 Cannot find device "nvmf_init_br2" 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:48:19.593 Cannot find device "nvmf_tgt_br" 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # true 00:48:19.593 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:48:19.854 Cannot find device "nvmf_tgt_br2" 00:48:19.854 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # true 00:48:19.854 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:48:19.854 Cannot find device "nvmf_init_br" 00:48:19.854 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # true 00:48:19.854 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:48:19.854 Cannot find device "nvmf_init_br2" 00:48:19.854 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # true 00:48:19.854 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:48:19.854 Cannot find device "nvmf_tgt_br" 00:48:19.854 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # true 00:48:19.854 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:48:19.854 Cannot find device 
"nvmf_tgt_br2" 00:48:19.854 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # true 00:48:19.854 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:48:19.854 Cannot find device "nvmf_br" 00:48:19.854 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # true 00:48:19.854 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:48:19.854 Cannot find device "nvmf_init_if" 00:48:19.854 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # true 00:48:19.854 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:48:19.854 Cannot find device "nvmf_init_if2" 00:48:19.854 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # true 00:48:19.854 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:48:19.854 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:48:19.854 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # true 00:48:19.854 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:48:19.854 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:48:19.854 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # true 00:48:19.854 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:48:19.854 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:48:19.854 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:48:19.854 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:48:19.854 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:48:19.854 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:48:19.854 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:48:19.854 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:48:19.854 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:48:19.854 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:48:19.854 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:48:19.854 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:48:19.854 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:48:19.854 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:48:19.854 
09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:48:19.854 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:48:20.113 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:48:20.113 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:48:20.113 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:48:20.113 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:48:20.113 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:48:20.113 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:48:20.113 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:48:20.113 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:48:20.113 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:48:20.113 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:48:20.113 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:48:20.113 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:48:20.113 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:48:20.113 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:48:20.113 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:48:20.113 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:48:20.113 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:48:20.113 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:48:20.113 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.135 ms 00:48:20.113 00:48:20.114 --- 10.0.0.3 ping statistics --- 00:48:20.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:20.114 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:48:20.114 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:48:20.114 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:48:20.114 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.091 ms 00:48:20.114 00:48:20.114 --- 10.0.0.4 ping statistics --- 00:48:20.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:20.114 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:48:20.114 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:48:20.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:48:20.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:48:20.114 00:48:20.114 --- 10.0.0.1 ping statistics --- 00:48:20.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:20.114 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:48:20.114 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:48:20.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:48:20.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:48:20.114 00:48:20.114 --- 10.0.0.2 ping statistics --- 00:48:20.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:20.114 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:48:20.114 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:48:20.114 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@461 -- # return 0 00:48:20.114 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:48:20.114 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:48:20.114 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:48:20.114 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:48:20.114 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:48:20.114 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:48:20.114 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:48:20.114 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:48:20.114 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:48:20.114 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:48:20.114 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:48:20.114 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=76356 00:48:20.114 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 76356 00:48:20.114 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 76356 ']' 00:48:20.114 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:48:20.114 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:20.114 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:20.114 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:48:20.114 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:20.114 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:20.114 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:48:20.114 [2024-12-09 09:59:27.116874] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:48:20.114 [2024-12-09 09:59:27.116968] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:48:20.373 [2024-12-09 09:59:27.270848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:20.373 [2024-12-09 09:59:27.349875] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:48:20.373 [2024-12-09 09:59:27.349939] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:48:20.373 [2024-12-09 09:59:27.349947] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:48:20.373 [2024-12-09 09:59:27.349953] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:48:20.373 [2024-12-09 09:59:27.349959] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:48:20.373 [2024-12-09 09:59:27.350342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:48:21.311 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:21.311 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:48:21.311 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:48:21.311 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:48:21.311 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:48:21.311 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:48:21.311 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:48:21.570 [2024-12-09 09:59:28.388853] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:21.570 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:48:21.570 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:48:21.570 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:48:21.830 Malloc1 00:48:21.830 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:48:22.089 Malloc2 00:48:22.089 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:48:22.348 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:48:22.608 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:48:22.867 [2024-12-09 09:59:29.747023] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:48:22.867 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:48:22.867 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1037529d-5b56-41aa-bc42-867388f457f7 -a 10.0.0.3 -s 4420 -i 4 00:48:22.867 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:48:22.867 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:48:22.867 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:48:22.867 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:48:22.867 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:48:25.403 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:48:25.403 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:48:25.403 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:48:25.403 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:48:25.403 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:48:25.403 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:48:25.403 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:48:25.403 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:48:25.403 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:48:25.403 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:48:25.403 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:48:25.403 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:48:25.403 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:48:25.403 [ 0]:0x1 00:48:25.403 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:48:25.403 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 
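The `waitforserial SPDKISFASTANDAWESOME` step traced above is a bounded poll: sleep, count the block devices whose serial matches, and retry up to 16 times. A minimal sketch of that loop (the real helper lives in common/autotest_common.sh and takes the expected device count as an optional second argument, as seen later when two controllers are connected):

    # Poll until $want block devices with serial $serial appear in lsblk.
    waitforserial() {
        local serial=$1 want=${2:-1} i=0
        while (( i++ <= 15 )); do
            sleep 2
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == want )) && return 0
        done
        return 1
    }

Once the device is present, the trace resolves the controller name by matching the subsystem NQN in `nvme list-subsys -o json` rather than assuming /dev/nvme0, which keeps the test robust on hosts with local NVMe drives.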
00:48:25.403 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8845abc302e140868024bacaeefa79cc 00:48:25.403 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8845abc302e140868024bacaeefa79cc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:48:25.403 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:48:25.403 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:48:25.403 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:48:25.403 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:48:25.403 [ 0]:0x1 00:48:25.403 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:48:25.403 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:48:25.403 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8845abc302e140868024bacaeefa79cc 00:48:25.403 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8845abc302e140868024bacaeefa79cc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:48:25.403 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:48:25.403 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:48:25.403 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:48:25.403 [ 1]:0x2 00:48:25.403 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:48:25.403 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:48:25.403 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=faa46f60d16e49b7996c26b4e6cc9019 00:48:25.403 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ faa46f60d16e49b7996c26b4e6cc9019 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:48:25.403 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:48:25.403 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:48:25.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:48:25.662 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:48:25.921 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:48:26.179 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:48:26.179 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1037529d-5b56-41aa-bc42-867388f457f7 -a 10.0.0.3 -s 4420 -i 4 00:48:26.179 09:59:33 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:48:26.179 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:48:26.179 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:48:26.179 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:48:26.179 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:48:26.179 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:48:28.079 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:48:28.079 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:48:28.079 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:48:28.079 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:48:28.079 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:48:28.079 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:48:28.336 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:48:28.336 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:48:28.336 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:48:28.336 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:48:28.336 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:48:28.336 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:48:28.336 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:48:28.336 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:48:28.336 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:28.336 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:48:28.336 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:28.336 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:48:28.336 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:48:28.336 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:48:28.336 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:48:28.336 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:48:28.336 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:48:28.336 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:48:28.336 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:48:28.336 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:48:28.336 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:48:28.336 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:48:28.336 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:48:28.336 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:48:28.336 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:48:28.336 [ 0]:0x2 00:48:28.336 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:48:28.336 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:48:28.336 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=faa46f60d16e49b7996c26b4e6cc9019 00:48:28.336 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ faa46f60d16e49b7996c26b4e6cc9019 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:48:28.336 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:48:28.594 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:48:28.594 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:48:28.594 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:48:28.594 [ 0]:0x1 00:48:28.594 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:48:28.594 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:48:28.594 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8845abc302e140868024bacaeefa79cc 00:48:28.594 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8845abc302e140868024bacaeefa79cc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:48:28.594 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:48:28.594 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:48:28.594 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:48:28.594 [ 1]:0x2 00:48:28.594 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:48:28.594 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:48:28.852 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
00:48:28.852 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ faa46f60d16e49b7996c26b4e6cc9019 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:48:28.852 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:48:29.111 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1
00:48:29.111 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:48:29.111 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:48:29.111 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:48:29.111 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:48:29.111 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:48:29.111 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:48:29.111 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:48:29.111 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:48:29.111 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:48:29.111 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:48:29.111 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:48:29.111 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:48:29.111 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:48:29.111 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:48:29.111 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:48:29.111 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:48:29.111 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:48:29.111 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2
00:48:29.111 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:48:29.111 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:48:29.111 [ 0]:0x2
00:48:29.111 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:48:29.111 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:48:29.111 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=faa46f60d16e49b7996c26b4e6cc9019
00:48:29.111 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ faa46f60d16e49b7996c26b4e6cc9019 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:48:29.111 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect
00:48:29.111 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:48:29.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:48:29.111 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:48:29.369 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2
00:48:29.369 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1037529d-5b56-41aa-bc42-867388f457f7 -a 10.0.0.3 -s 4420 -i 4
00:48:29.626 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2
00:48:29.626 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:48:29.626 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:48:29.626 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]]
00:48:29.626 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2
00:48:29.626 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:48:31.561 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:48:31.561 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:48:31.561 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:48:31.561 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2
00:48:31.561 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:48:31.561 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:48:31.561 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:48:31.561 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:48:31.561 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:48:31.561 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:48:31.561 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1
00:48:31.561 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:48:31.561 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:48:31.561 [ 0]:0x1
00:48:31.561 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
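The waitforserial helper traced above (common/autotest_common.sh, around lines 1202-1212) polls until the expected number of block devices backed by the given serial shows up in lsblk; a sketch, assuming the loop retries up to 16 times as the (( i++ <= 15 )) guard suggests:

  waitforserial() {
      local serial=$1 i=0
      local nvme_device_counter=1 nvme_devices=0
      [[ -n $2 ]] && nvme_device_counter=$2
      while (( i++ <= 15 )); do
          sleep 2
          # count namespaces whose SERIAL column matches, e.g. SPDKISFASTANDAWESOME
          nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
          (( nvme_devices == nvme_device_counter )) && return 0
      done
      return 1
  }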
00:48:31.561 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:48:31.820 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8845abc302e140868024bacaeefa79cc
00:48:31.820 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8845abc302e140868024bacaeefa79cc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:48:31.820 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2
00:48:31.820 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:48:31.820 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:48:31.820 [ 1]:0x2
00:48:31.820 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:48:31.820 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:48:31.820 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=faa46f60d16e49b7996c26b4e6cc9019
00:48:31.820 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ faa46f60d16e49b7996c26b4e6cc9019 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:48:31.820 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:48:32.079 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1
00:48:32.079 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:48:32.079 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:48:32.079 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:48:32.079 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:48:32.079 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:48:32.079 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:48:32.080 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:48:32.080 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:48:32.080 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:48:32.080 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:48:32.080 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:48:32.080 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:48:32.080 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:48:32.080 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:48:32.080 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:48:32.080 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:48:32.080 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:48:32.080 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2
00:48:32.080 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:48:32.080 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:48:32.080 [ 0]:0x2
00:48:32.080 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:48:32.080 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:48:32.080 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=faa46f60d16e49b7996c26b4e6cc9019
00:48:32.080 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ faa46f60d16e49b7996c26b4e6cc9019 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:48:32.080 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:48:32.080 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:48:32.080 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:48:32.080 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:48:32.080 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:48:32.080 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:48:32.080 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:48:32.080 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:48:32.080 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:48:32.080 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:48:32.080 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:48:32.080 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:48:32.338 [2024-12-09 09:59:39.242722] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:48:32.338 2024/12/09 09:59:39 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters
00:48:32.338 request:
00:48:32.338 {
00:48:32.338 "method": "nvmf_ns_remove_host",
00:48:32.338 "params": {
00:48:32.338 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:48:32.338 "nsid": 2,
00:48:32.338 "host": "nqn.2016-06.io.spdk:host1"
00:48:32.338 }
00:48:32.338 }
00:48:32.338 Got JSON-RPC error response
00:48:32.338 GoRPCClient: error on JSON-RPC call
00:48:32.338 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:48:32.338 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:48:32.338 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:48:32.338 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:48:32.338 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1
00:48:32.338 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:48:32.338 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:48:32.338 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:48:32.338 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:48:32.338 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:48:32.338 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:48:32.338 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:48:32.338 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:48:32.338 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:48:32.338 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:48:32.338 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:48:32.338 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:48:32.338 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:48:32.338 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:48:32.338 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:48:32.338 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:48:32.338 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:48:32.338 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2
00:48:32.338 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:48:32.338 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
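The NOT wrapper traced around the expected-to-fail calls above inverts an exit status; a minimal sketch of the bookkeeping visible in the trace (local es=0, es=1, the es > 128 signal check, and the final (( !es == 0 )) inversion), not the verbatim common/autotest_common.sh helper:

  NOT() {
      local es=0
      "$@" || es=$?
      # statuses above 128 mean death by signal, which the helper
      # does not count as the expected kind of failure
      (( es > 128 )) && return "$es"
      (( !es == 0 ))   # succeed only if the wrapped command failed
  }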
00:48:32.338 [ 0]:0x2
00:48:32.338 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:48:32.338 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:48:32.597 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=faa46f60d16e49b7996c26b4e6cc9019
00:48:32.597 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ faa46f60d16e49b7996c26b4e6cc9019 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:48:32.597 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect
00:48:32.597 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:48:32.597 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:48:32.597 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2
00:48:32.597 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=76738
00:48:32.597 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT
00:48:32.597 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 76738 /var/tmp/host.sock
00:48:32.597 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 76738 ']'
00:48:32.597 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock
00:48:32.597 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100
00:48:32.597 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
00:48:32.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
00:48:32.597 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable
00:48:32.597 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:48:32.597 [2024-12-09 09:59:39.508225] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization...
00:48:32.597 [2024-12-09 09:59:39.508319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76738 ]
00:48:32.856 [2024-12-09 09:59:39.657271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:48:32.856 [2024-12-09 09:59:39.719098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:48:33.792 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:48:33.792 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0
00:48:33.792 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:48:33.792 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:48:34.050 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 153e0603-2542-4417-aa5d-b2707fcfd725
00:48:34.050 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:48:34.050 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 153E060325424417AA5DB2707FCFD725 -i
00:48:34.308 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid a345ea9c-4aa6-4d29-9187-a249110a96b7
00:48:34.308 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:48:34.308 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g A345EA9C4AA64D299187A249110A96B7 -i
00:48:34.566 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:48:35.133 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2
00:48:35.465 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
00:48:35.465 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
00:48:35.724 nvme0n1
00:48:35.724 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
00:48:35.724 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
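uuid2nguid above turns a bdev UUID into the 32-hex-digit NGUID passed as the -g argument to nvmf_subsystem_add_ns. The trace only shows the tr -d - step, so the upper-casing visible in the resulting argument is assumed to happen in the same helper; a sketch under that assumption:

  uuid2nguid() {
      # strip dashes and upper-case, e.g.
      # 153e0603-2542-4417-aa5d-b2707fcfd725 -> 153E060325424417AA5DB2707FCFD725
      tr -d - <<< "$1" | tr '[:lower:]' '[:upper:]'
  }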
00:48:35.982 nvme1n2
00:48:35.982 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs
00:48:35.982 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs
00:48:35.982 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort
00:48:35.982 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name'
00:48:35.982 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs
00:48:36.241 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]]
00:48:36.242 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid'
00:48:36.242 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1
00:48:36.242 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1
00:48:36.808 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 153e0603-2542-4417-aa5d-b2707fcfd725 == \1\5\3\e\0\6\0\3\-\2\5\4\2\-\4\4\1\7\-\a\a\5\d\-\b\2\7\0\7\f\c\f\d\7\2\5 ]]
00:48:36.808 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid'
00:48:36.808 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2
00:48:36.808 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2
00:48:37.067 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ a345ea9c-4aa6-4d29-9187-a249110a96b7 == \a\3\4\5\e\a\9\c\-\4\a\a\6\-\4\d\2\9\-\9\1\8\7\-\a\2\4\9\1\1\0\a\9\6\b\7 ]]
00:48:37.067 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:48:37.327 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:48:37.587 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 153e0603-2542-4417-aa5d-b2707fcfd725
00:48:37.587 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:48:37.587 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 153E060325424417AA5DB2707FCFD725
00:48:37.587 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:48:37.587 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 153E060325424417AA5DB2707FCFD725
00:48:37.587 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:48:37.587 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:48:37.587 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:48:37.587 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:48:37.587 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:48:37.587 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:48:37.587 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:48:37.587 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:48:37.587 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 153E060325424417AA5DB2707FCFD725
00:48:37.853 [2024-12-09 09:59:44.810559] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid
00:48:37.853 [2024-12-09 09:59:44.810609] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19
00:48:37.853 [2024-12-09 09:59:44.810620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:48:37.853 2024/12/09 09:59:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:invalid hide_metadata:%!s(bool=false) nguid:153E060325424417AA5DB2707FCFD725 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:48:37.853 request:
00:48:37.853 {
00:48:37.853 "method": "nvmf_subsystem_add_ns",
00:48:37.853 "params": {
00:48:37.853 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:48:37.853 "namespace": {
00:48:37.853 "bdev_name": "invalid",
00:48:37.853 "nsid": 1,
00:48:37.853 "nguid": "153E060325424417AA5DB2707FCFD725",
00:48:37.853 "no_auto_visible": false,
00:48:37.853 "hide_metadata": false
00:48:37.853 }
00:48:37.853 }
00:48:37.853 }
00:48:37.853 Got JSON-RPC error response
00:48:37.853 GoRPCClient: error on JSON-RPC call
00:48:37.853 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:48:37.853 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:48:37.853 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:48:37.853 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:48:37.853 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 153e0603-2542-4417-aa5d-b2707fcfd725
00:48:37.853 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:48:37.853 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 153E060325424417AA5DB2707FCFD725 -i
00:48:38.113 09:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s
00:48:40.650 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs
00:48:40.650 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs
00:48:40.650 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length
00:48:40.650 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 ))
00:48:40.650 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 76738
00:48:40.650 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 76738 ']'
00:48:40.650 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 76738
00:48:40.650 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname
00:48:40.650 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:48:40.650 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76738
00:48:40.650 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:48:40.650 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:48:40.650 killing process with pid 76738
09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76738'
09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 76738
09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 76738
00:48:40.908 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:48:41.167 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:48:41.167 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini
00:48:41.167 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup
00:48:41.167 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync
00:48:41.427 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:48:41.427 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e
00:48:41.427 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20}
00:48:41.427 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:48:41.427 rmmod nvme_tcp
00:48:41.427 rmmod nvme_fabrics
00:48:41.427 rmmod nvme_keyring
00:48:41.427 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:48:41.427 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e
00:48:41.427 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0
00:48:41.427 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 76356 ']'
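killprocess, traced here first for the host-side spdk_tgt (pid 76738) and just below for the target daemon (pid 76356), follows the pattern visible in the xtrace; a sketch with the sudo special case elided:

  killprocess() {
      local pid=$1 process_name
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                       # still running?
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      # the real helper takes a different path when process_name is sudo
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }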
00:48:41.427 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 76356
00:48:41.427 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 76356 ']'
00:48:41.427 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 76356
00:48:41.427 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname
00:48:41.427 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:48:41.427 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76356
00:48:41.427 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:48:41.427 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:48:41.427 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76356'
killing process with pid 76356
09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 76356
09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 76356
00:48:41.687 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:48:41.687 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:48:41.687 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:48:41.687 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr
00:48:41.687 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save
00:48:41.687 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore
00:48:41.687 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:48:41.687 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:48:41.687 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:48:41.687 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:48:41.687 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:48:41.687 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:48:41.687 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:48:41.687 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:48:41.687 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:48:41.687 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:48:41.687 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:48:41.687 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:48:41.687 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:48:41.947 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:48:41.947 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:48:41.947 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:48:41.947 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@246 -- # remove_spdk_ns
00:48:41.947 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:48:41.947 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:48:41.947 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:48:41.947 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@300 -- # return 0
00:48:41.947
00:48:41.947 real 0m22.559s
00:48:41.947 user 0m38.182s
00:48:41.947 sys 0m3.640s
00:48:41.947 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable
00:48:41.947 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:48:41.947 ************************************
00:48:41.947 END TEST nvmf_ns_masking
00:48:41.947 ************************************
00:48:41.947 09:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 0 -eq 1 ]]
00:48:41.947 09:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]]
00:48:41.947 09:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp
00:48:41.947 09:59:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:48:41.947 09:59:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:48:41.947 09:59:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:48:41.947 ************************************
00:48:41.947 START TEST nvmf_auth_target
00:48:41.947 ************************************
00:48:41.947 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp
00:48:41.947 * Looking for test storage...
00:48:41.947 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:48:41.947 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:48:41.947 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version
00:48:41.947 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-:
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-:
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<'
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 ))
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:48:42.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:48:42.207 --rc genhtml_branch_coverage=1
00:48:42.207 --rc genhtml_function_coverage=1
00:48:42.207 --rc genhtml_legend=1
00:48:42.207 --rc geninfo_all_blocks=1
00:48:42.207 --rc geninfo_unexecuted_blocks=1
00:48:42.207
00:48:42.207 '
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:48:42.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:48:42.207 --rc genhtml_branch_coverage=1
00:48:42.207 --rc genhtml_function_coverage=1
00:48:42.207 --rc genhtml_legend=1
00:48:42.207 --rc geninfo_all_blocks=1
00:48:42.207 --rc geninfo_unexecuted_blocks=1
00:48:42.207
00:48:42.207 '
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:48:42.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:48:42.207 --rc genhtml_branch_coverage=1
00:48:42.207 --rc genhtml_function_coverage=1
00:48:42.207 --rc genhtml_legend=1
00:48:42.207 --rc geninfo_all_blocks=1
00:48:42.207 --rc geninfo_unexecuted_blocks=1
00:48:42.207
00:48:42.207 '
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:48:42.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:48:42.207 --rc genhtml_branch_coverage=1
00:48:42.207 --rc genhtml_function_coverage=1
00:48:42.207 --rc genhtml_legend=1
00:48:42.207 --rc geninfo_all_blocks=1
00:48:42.207 --rc geninfo_unexecuted_blocks=1
00:48:42.207
00:48:42.207 '
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s
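The lt/cmp_versions helpers from scripts/common.sh traced just above compare dotted version strings component by component (here concluding 1.15 < 2 for the installed lcov); a simplified sketch of the loop, not the verbatim SPDK helper:

  cmp_versions() {  # usage: cmp_versions 1.15 '<' 2
      local -a ver1 ver2
      local op=$2 v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$3"
      # walk the longer component list; missing components count as 0
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == *'>'* ]]; return; }
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == *'<'* ]]; return; }
      done
      [[ $op == *'='* ]]   # all components equal
  }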
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:48:42.207 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:48:42.208 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512")
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=()
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=()
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:48:42.208 Cannot find device "nvmf_init_br" 00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:48:42.208 Cannot find device "nvmf_init_br2" 00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:48:42.208 Cannot find device "nvmf_tgt_br" 00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:48:42.208 Cannot find device "nvmf_tgt_br2" 00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:48:42.208 Cannot find device "nvmf_init_br" 00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:48:42.208 Cannot find device "nvmf_init_br2" 00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:48:42.208 Cannot find device "nvmf_tgt_br" 00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:48:42.208 Cannot find device "nvmf_tgt_br2" 00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:48:42.208 Cannot find device "nvmf_br" 00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:48:42.208 Cannot find device "nvmf_init_if" 00:48:42.208 09:59:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:48:42.208 Cannot find device "nvmf_init_if2" 00:48:42.208 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:48:42.209 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:48:42.209 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:48:42.209 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:48:42.209 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:48:42.209 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:48:42.209 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:48:42.209 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:48:42.209 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:48:42.468 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:48:42.468 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:48:42.468 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:48:42.468 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:48:42.468 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:48:42.468 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:48:42.468 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:48:42.468 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:48:42.468 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:48:42.468 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:48:42.468 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:48:42.468 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:48:42.468 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:48:42.468 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:48:42.468 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:48:42.468 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:48:42.468 09:59:49 
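
The topology built in @177-@204 gives the initiator side nvmf_init_if (10.0.0.1/24) and nvmf_init_if2 (10.0.0.2/24) in the root namespace, and moves the target ends nvmf_tgt_if (10.0.0.3/24) and nvmf_tgt_if2 (10.0.0.4/24) into nvmf_tgt_ns_spdk; the unaddressed *_br veth peers stay in the root namespace so the next step can enslave them to the nvmf_br bridge. Condensed to a single pair:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator leg + bridge peer
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target leg + bridge peer
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
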
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:48:42.468 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:48:42.468 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:48:42.468 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:48:42.468 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:48:42.468 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:48:42.468 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:48:42.468 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:48:42.468 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:48:42.468 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:48:42.468 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:48:42.468 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:48:42.468 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:48:42.468 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:48:42.468 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:48:42.468 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:48:42.468 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:48:42.468 00:48:42.468 --- 10.0.0.3 ping statistics --- 00:48:42.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:42.468 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:48:42.468 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:48:42.468 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:48:42.468 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:48:42.468 00:48:42.468 --- 10.0.0.4 ping statistics --- 00:48:42.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:42.468 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:48:42.469 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:48:42.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:48:42.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:48:42.469 00:48:42.469 --- 10.0.0.1 ping statistics --- 00:48:42.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:42.469 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:48:42.469 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:48:42.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:48:42.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:48:42.469 00:48:42.469 --- 10.0.0.2 ping statistics --- 00:48:42.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:42.469 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:48:42.469 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:48:42.469 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:48:42.469 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:48:42.469 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:48:42.469 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:48:42.469 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:48:42.469 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:48:42.469 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:48:42.469 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:48:42.469 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:48:42.469 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:48:42.469 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:48:42.469 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:48:42.469 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=77238 00:48:42.469 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 77238 00:48:42.469 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 77238 ']' 00:48:42.469 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:42.469 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:42.469 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:48:42.469 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
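
With all four addresses answering pings across the bridge, @227 prepends the ip netns exec prefix to NVMF_APP, so the target started by nvmfappstart (@508) binds its TCP listeners to 10.0.0.3/10.0.0.4 inside the namespace while the host-side tooling stays in the root namespace. A sketch of the start-and-wait pattern; the probe RPC and retry bound are illustrative, not a copy of waitforlisten:

ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
nvmfpid=$!
# Poll the RPC socket until the application answers.
for ((i = 0; i < 100; i++)); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
done
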
00:48:42.469 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:42.469 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=77282 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cf613b4a26477b3bc896b99531ed724ad5863624c9fafa72 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.rUD 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cf613b4a26477b3bc896b99531ed724ad5863624c9fafa72 0 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cf613b4a26477b3bc896b99531ed724ad5863624c9fafa72 0 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cf613b4a26477b3bc896b99531ed724ad5863624c9fafa72 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:48:43.847 09:59:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.rUD 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.rUD 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.rUD 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f9eb4a38dfd02f50320d638d30c9362621920b25b0b385b51154da471b355b50 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.O5g 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f9eb4a38dfd02f50320d638d30c9362621920b25b0b385b51154da471b355b50 3 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f9eb4a38dfd02f50320d638d30c9362621920b25b0b385b51154da471b355b50 3 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f9eb4a38dfd02f50320d638d30c9362621920b25b0b385b51154da471b355b50 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.O5g 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.O5g 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.O5g 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:48:43.847 09:59:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=aac4e1b7f62b092e4f8fd2a27ab9240a 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.mYg 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key aac4e1b7f62b092e4f8fd2a27ab9240a 1 00:48:43.847 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 aac4e1b7f62b092e4f8fd2a27ab9240a 1 00:48:43.848 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:48:43.848 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:48:43.848 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=aac4e1b7f62b092e4f8fd2a27ab9240a 00:48:43.848 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:48:43.848 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:48:43.848 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.mYg 00:48:43.848 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.mYg 00:48:43.848 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.mYg 00:48:43.848 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:48:43.848 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:48:43.848 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:48:43.848 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:48:43.848 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:48:43.848 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:48:43.848 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:48:43.848 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=68338680f42a55dda483d21f96cffb62f345c55dc458c385 00:48:43.848 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:48:43.848 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.H0y 00:48:43.848 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 68338680f42a55dda483d21f96cffb62f345c55dc458c385 2 00:48:43.848 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 68338680f42a55dda483d21f96cffb62f345c55dc458c385 2 00:48:43.848 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:48:43.848 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:48:43.848 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=68338680f42a55dda483d21f96cffb62f345c55dc458c385 00:48:43.848 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:48:43.848 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:48:44.107 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.H0y 00:48:44.107 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.H0y 00:48:44.107 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.H0y 00:48:44.107 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:48:44.107 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:48:44.107 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:48:44.107 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:48:44.107 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:48:44.107 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:48:44.107 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:48:44.107 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c3b8459215c6ae5d9aafea38a65cb30b6ed4729d571a6492 00:48:44.107 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:48:44.107 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.pEv 00:48:44.107 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c3b8459215c6ae5d9aafea38a65cb30b6ed4729d571a6492 2 00:48:44.107 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c3b8459215c6ae5d9aafea38a65cb30b6ed4729d571a6492 2 00:48:44.107 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:48:44.107 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:48:44.107 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c3b8459215c6ae5d9aafea38a65cb30b6ed4729d571a6492 00:48:44.107 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:48:44.107 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:48:44.107 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.pEv 00:48:44.107 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.pEv 00:48:44.107 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.pEv 00:48:44.107 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:48:44.107 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:48:44.107 09:59:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:48:44.107 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:48:44.107 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:48:44.107 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:48:44.107 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:48:44.107 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=85b1204edf3277a4c20a1eca2652fa50 00:48:44.108 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Fct 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 85b1204edf3277a4c20a1eca2652fa50 1 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 85b1204edf3277a4c20a1eca2652fa50 1 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=85b1204edf3277a4c20a1eca2652fa50 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Fct 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Fct 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Fct 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f1f0b8b642b7df19c76340a738aa95afe84612763a1ee439e4766fd39390eda1 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.3iI 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
f1f0b8b642b7df19c76340a738aa95afe84612763a1ee439e4766fd39390eda1 3 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f1f0b8b642b7df19c76340a738aa95afe84612763a1ee439e4766fd39390eda1 3 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f1f0b8b642b7df19c76340a738aa95afe84612763a1ee439e4766fd39390eda1 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.3iI 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.3iI 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.3iI 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 77238 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 77238 ']' 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:44.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:44.108 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:48:44.675 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:44.675 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:48:44.675 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 77282 /var/tmp/host.sock 00:48:44.675 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 77282 ']' 00:48:44.675 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:48:44.675 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:44.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:48:44.675 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
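
Each gen_dhchap_key call above draws len/2 random bytes as a hex string (xxd -p -c0 -l N /dev/urandom) and hands it to an inline python - step whose body the xtrace never prints. Judging from the DHHC-1:<id>:<base64>: secrets used later in this log, where the id is the 00-03 value from the digests map and the base64 payload decodes back to the hex string itself, the formatter presumably emits the DH-HMAC-CHAP secret representation: the secret bytes followed by their CRC32. A sketch under that assumption:

key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex chars, used verbatim as the secret bytes
python - "$key" 0 <<'EOF'              # 0 = null digest; 1/2/3 = sha256/sha384/sha512
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
# DHHC-1:<two-digit digest id>:<base64(secret || crc32(secret), little-endian)>:
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
EOF
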
00:48:44.675 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:44.675 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:48:44.675 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:44.675 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:48:44.675 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:48:44.675 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:44.675 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:48:44.675 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:44.675 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:48:44.675 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.rUD 00:48:44.675 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:44.675 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:48:44.675 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:44.675 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.rUD 00:48:44.675 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.rUD 00:48:45.007 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.O5g ]] 00:48:45.007 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.O5g 00:48:45.007 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:45.007 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:48:45.282 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:45.282 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.O5g 00:48:45.282 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.O5g 00:48:45.282 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:48:45.282 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.mYg 00:48:45.282 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:45.282 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:48:45.282 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:45.282 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.mYg 00:48:45.282 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.mYg 00:48:45.541 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.H0y ]] 00:48:45.541 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.H0y 00:48:45.541 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:45.541 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:48:45.800 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:45.800 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.H0y 00:48:45.800 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.H0y 00:48:46.059 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:48:46.059 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.pEv 00:48:46.059 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:46.059 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:48:46.059 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:46.059 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.pEv 00:48:46.059 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.pEv 00:48:46.319 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Fct ]] 00:48:46.319 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Fct 00:48:46.319 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:46.319 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:48:46.319 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:46.319 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Fct 00:48:46.319 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Fct 00:48:46.579 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:48:46.579 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.3iI 00:48:46.579 09:59:53 
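
Every key file is registered twice under the same name: rpc_cmd talks to the target's RPC socket (/var/tmp/spdk.sock), while the hostrpc wrapper (target/auth.sh@31) points rpc.py at /var/tmp/host.sock, the second spdk_tgt that plays the NVMe-oF initiator. Both sides must be able to resolve key0..key3, and the ckey* controller keys where present, by name once the DH-HMAC-CHAP parameters reference them. The @108-@113 loop, condensed:

rpc=./scripts/rpc.py
for i in "${!keys[@]}"; do
    $rpc -s /var/tmp/spdk.sock keyring_file_add_key "key$i" "${keys[i]}"    # target side
    $rpc -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[i]}"    # host side
    [[ -n ${ckeys[i]} ]] || continue                                        # ckey3 is empty
    $rpc -s /var/tmp/spdk.sock keyring_file_add_key "ckey$i" "${ckeys[i]}"
    $rpc -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[i]}"
done
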
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:46.579 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:48:46.579 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:46.579 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.3iI 00:48:46.579 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.3iI 00:48:46.838 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:48:46.838 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:48:46.838 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:48:46.838 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:48:46.838 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:48:46.838 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:48:47.096 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:48:47.096 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:48:47.096 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:48:47.096 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:48:47.096 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:48:47.096 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:48:47.096 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:48:47.096 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:47.096 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:48:47.096 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:47.096 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:48:47.096 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:48:47.096 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:48:47.355 00:48:47.355 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:48:47.355 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:48:47.355 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:48:47.614 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:48:47.614 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:48:47.614 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:47.614 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:48:47.614 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:47.614 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:48:47.614 { 00:48:47.614 "auth": { 00:48:47.614 "dhgroup": "null", 00:48:47.614 "digest": "sha256", 00:48:47.614 "state": "completed" 00:48:47.614 }, 00:48:47.614 "cntlid": 1, 00:48:47.614 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:48:47.614 "listen_address": { 00:48:47.614 "adrfam": "IPv4", 00:48:47.614 "traddr": "10.0.0.3", 00:48:47.614 "trsvcid": "4420", 00:48:47.614 "trtype": "TCP" 00:48:47.614 }, 00:48:47.614 "peer_address": { 00:48:47.614 "adrfam": "IPv4", 00:48:47.614 "traddr": "10.0.0.1", 00:48:47.614 "trsvcid": "43870", 00:48:47.614 "trtype": "TCP" 00:48:47.614 }, 00:48:47.614 "qid": 0, 00:48:47.614 "state": "enabled", 00:48:47.614 "thread": "nvmf_tgt_poll_group_000" 00:48:47.614 } 00:48:47.614 ]' 00:48:47.614 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:48:47.872 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:48:47.872 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:48:47.872 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:48:47.872 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:48:47.872 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:48:47.872 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:48:47.872 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:48:48.441 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:48:48.441 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:48:52.626 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:48:52.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:48:52.626 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:48:52.626 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:52.626 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:48:52.626 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:52.626 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:48:52.626 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:48:52.626 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:48:52.886 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:48:52.886 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:48:52.886 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:48:52.886 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:48:52.886 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:48:52.886 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:48:52.886 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:48:52.886 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:52.886 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:48:52.886 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:52.886 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:48:52.886 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:48:52.886 09:59:59 
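
From here the trace repeats one shape per key slot: pin the host to a single digest/dhgroup combination, authorize the host NQN on the subsystem with that key pair, attach a controller through the host-side spdk_tgt, read the qpair back from the target and require auth.state == "completed", detach, then redo the handshake with the kernel initiator via nvme connect using the formatted DHHC-1 secrets (here read back from the key files, which hold the formatted strings). One iteration, condensed, with rpc_cmd/hostrpc standing for the harness wrappers seen in the trace:

hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -e '.[0].auth.state == "completed"'    # fail the test if auth did not finish
hostrpc bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 \
    --dhchap-secret "$(<"${keys[1]}")" --dhchap-ctrl-secret "$(<"${ckeys[1]}")"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
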
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:48:53.146 00:48:53.146 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:48:53.146 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:48:53.146 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:48:53.404 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:48:53.404 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:48:53.404 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:53.404 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:48:53.404 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:53.404 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:48:53.404 { 00:48:53.404 "auth": { 00:48:53.404 "dhgroup": "null", 00:48:53.404 "digest": "sha256", 00:48:53.404 "state": "completed" 00:48:53.404 }, 00:48:53.404 "cntlid": 3, 00:48:53.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:48:53.404 "listen_address": { 00:48:53.404 "adrfam": "IPv4", 00:48:53.404 "traddr": "10.0.0.3", 00:48:53.404 "trsvcid": "4420", 00:48:53.404 "trtype": "TCP" 00:48:53.404 }, 00:48:53.404 "peer_address": { 00:48:53.404 "adrfam": "IPv4", 00:48:53.404 "traddr": "10.0.0.1", 00:48:53.404 "trsvcid": "44582", 00:48:53.404 "trtype": "TCP" 00:48:53.404 }, 00:48:53.404 "qid": 0, 00:48:53.404 "state": "enabled", 00:48:53.404 "thread": "nvmf_tgt_poll_group_000" 00:48:53.404 } 00:48:53.404 ]' 00:48:53.404 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:48:53.404 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:48:53.404 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:48:53.404 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:48:53.404 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:48:53.662 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:48:53.662 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:48:53.663 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:48:53.921 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret 
DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:48:53.921 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:48:54.490 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:48:54.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:48:54.490 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:48:54.490 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:54.490 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:48:54.490 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:54.490 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:48:54.490 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:48:54.490 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:48:54.748 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:48:54.748 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:48:54.748 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:48:54.748 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:48:54.748 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:48:54.748 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:48:54.748 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:48:54.748 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:54.748 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:48:54.748 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:54.748 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:48:54.748 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:48:54.748 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:48:55.006 00:48:55.006 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:48:55.006 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:48:55.006 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:48:55.265 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:48:55.265 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:48:55.265 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:55.265 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:48:55.265 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:55.265 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:48:55.265 { 00:48:55.265 "auth": { 00:48:55.265 "dhgroup": "null", 00:48:55.265 "digest": "sha256", 00:48:55.265 "state": "completed" 00:48:55.265 }, 00:48:55.265 "cntlid": 5, 00:48:55.265 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:48:55.265 "listen_address": { 00:48:55.265 "adrfam": "IPv4", 00:48:55.265 "traddr": "10.0.0.3", 00:48:55.265 "trsvcid": "4420", 00:48:55.265 "trtype": "TCP" 00:48:55.265 }, 00:48:55.265 "peer_address": { 00:48:55.265 "adrfam": "IPv4", 00:48:55.265 "traddr": "10.0.0.1", 00:48:55.265 "trsvcid": "44620", 00:48:55.265 "trtype": "TCP" 00:48:55.265 }, 00:48:55.265 "qid": 0, 00:48:55.265 "state": "enabled", 00:48:55.265 "thread": "nvmf_tgt_poll_group_000" 00:48:55.265 } 00:48:55.265 ]' 00:48:55.265 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:48:55.545 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:48:55.545 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:48:55.545 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:48:55.545 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:48:55.545 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:48:55.545 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:48:55.545 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:48:55.804 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37: 00:48:55.804 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37: 00:48:56.372 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:48:56.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:48:56.373 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:48:56.373 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:56.373 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:48:56.373 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:56.373 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:48:56.373 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:48:56.373 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:48:56.632 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:48:56.632 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:48:56.632 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:48:56.632 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:48:56.632 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:48:56.632 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:48:56.632 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key3 00:48:56.632 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:56.632 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:48:56.632 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:56.632 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:48:56.632 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:48:56.632 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:48:56.892 00:48:56.892 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:48:56.892 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:48:56.892 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:48:57.151 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:48:57.151 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:48:57.151 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:57.151 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:48:57.151 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:57.151 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:48:57.151 { 00:48:57.151 "auth": { 00:48:57.151 "dhgroup": "null", 00:48:57.151 "digest": "sha256", 00:48:57.151 "state": "completed" 00:48:57.151 }, 00:48:57.151 "cntlid": 7, 00:48:57.151 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:48:57.151 "listen_address": { 00:48:57.151 "adrfam": "IPv4", 00:48:57.151 "traddr": "10.0.0.3", 00:48:57.151 "trsvcid": "4420", 00:48:57.151 "trtype": "TCP" 00:48:57.151 }, 00:48:57.151 "peer_address": { 00:48:57.151 "adrfam": "IPv4", 00:48:57.151 "traddr": "10.0.0.1", 00:48:57.151 "trsvcid": "44660", 00:48:57.151 "trtype": "TCP" 00:48:57.151 }, 00:48:57.151 "qid": 0, 00:48:57.151 "state": "enabled", 00:48:57.151 "thread": "nvmf_tgt_poll_group_000" 00:48:57.151 } 00:48:57.151 ]' 00:48:57.151 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:48:57.151 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:48:57.151 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:48:57.411 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:48:57.411 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:48:57.411 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:48:57.411 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:48:57.411 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:48:57.670 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:48:57.670 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:48:58.247 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:48:58.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:48:58.247 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:48:58.247 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:58.247 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:48:58.247 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:58.247 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:48:58.247 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:48:58.247 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:48:58.247 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:48:58.507 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:48:58.507 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:48:58.507 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:48:58.507 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:48:58.507 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:48:58.507 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:48:58.507 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:48:58.507 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:58.507 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:48:58.507 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:58.507 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:48:58.507 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:48:58.507 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:48:58.765 00:48:58.765 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:48:58.765 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:48:58.765 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:48:59.024 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:48:59.024 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:48:59.024 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:59.024 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:48:59.024 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:59.024 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:48:59.024 { 00:48:59.024 "auth": { 00:48:59.024 "dhgroup": "ffdhe2048", 00:48:59.024 "digest": "sha256", 00:48:59.024 "state": "completed" 00:48:59.024 }, 00:48:59.024 "cntlid": 9, 00:48:59.024 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:48:59.024 "listen_address": { 00:48:59.024 "adrfam": "IPv4", 00:48:59.024 "traddr": "10.0.0.3", 00:48:59.024 "trsvcid": "4420", 00:48:59.024 "trtype": "TCP" 00:48:59.024 }, 00:48:59.024 "peer_address": { 00:48:59.024 "adrfam": "IPv4", 00:48:59.024 "traddr": "10.0.0.1", 00:48:59.025 "trsvcid": "44686", 00:48:59.025 "trtype": "TCP" 00:48:59.025 }, 00:48:59.025 "qid": 0, 00:48:59.025 "state": "enabled", 00:48:59.025 "thread": "nvmf_tgt_poll_group_000" 00:48:59.025 } 00:48:59.025 ]' 00:48:59.025 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:48:59.025 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:48:59.025 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:48:59.025 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:48:59.025 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:48:59.283 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:48:59.283 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:48:59.283 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:48:59.542 
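
Condensed, the ffdhe2048/key0 round traced above is just a handful of host-side RPCs. A minimal sketch, assuming hostrpc wraps rpc.py against /var/tmp/host.sock (as the target/auth.sh@31 expansions show), that key0/ckey0 name key material registered with the host earlier in the run, and with $hostnqn standing for the long uuid host NQN used throughout:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }

    # pin the host to exactly one digest/DH-group pair for this round
    rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    # attach with DH-HMAC-CHAP; the controller key makes it bidirectional
    rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # the controller only shows up if authentication succeeded
    [[ $(rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc bdev_nvme_detach_controller nvme0
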
10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:48:59.542 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:49:00.109 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:49:00.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:49:00.109 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:49:00.109 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:00.109 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:00.109 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:00.109 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:49:00.109 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:49:00.109 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:49:00.109 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:49:00.109 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:49:00.109 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:49:00.109 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:49:00.109 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:49:00.109 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:49:00.109 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:00.109 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:00.109 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:00.369 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:00.369 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:00.369 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:00.369 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:00.629 00:49:00.629 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:49:00.629 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:49:00.629 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:49:00.890 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:00.890 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:49:00.891 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:00.891 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:00.891 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:00.891 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:49:00.891 { 00:49:00.891 "auth": { 00:49:00.891 "dhgroup": "ffdhe2048", 00:49:00.891 "digest": "sha256", 00:49:00.891 "state": "completed" 00:49:00.891 }, 00:49:00.891 "cntlid": 11, 00:49:00.891 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:49:00.891 "listen_address": { 00:49:00.891 "adrfam": "IPv4", 00:49:00.891 "traddr": "10.0.0.3", 00:49:00.891 "trsvcid": "4420", 00:49:00.891 "trtype": "TCP" 00:49:00.891 }, 00:49:00.891 "peer_address": { 00:49:00.891 "adrfam": "IPv4", 00:49:00.891 "traddr": "10.0.0.1", 00:49:00.891 "trsvcid": "52274", 00:49:00.891 "trtype": "TCP" 00:49:00.891 }, 00:49:00.891 "qid": 0, 00:49:00.891 "state": "enabled", 00:49:00.891 "thread": "nvmf_tgt_poll_group_000" 00:49:00.891 } 00:49:00.891 ]' 00:49:00.891 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:49:00.891 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:49:00.891 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:49:00.891 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:49:00.891 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:49:00.891 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:49:00.891 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:49:00.891 
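
Each round is verified on the target, not just on the host: the test pulls the subsystem's queue pairs and asserts on their auth block, exactly as the jq filters in the trace show. A sketch, assuming rpc_cmd is the autotest wrapper for the target's default RPC socket:

    # verify on the target that authentication really completed
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

Any state other than completed, or a missing auth object, fails the [[ ]] test and (presumably, under the autotest error handling) aborts the run.
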
10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:49:01.155 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:49:01.155 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:49:01.725 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:49:01.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:49:01.725 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:49:01.725 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:01.725 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:01.725 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:01.725 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:49:01.725 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:49:01.725 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:49:01.985 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:49:01.985 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:49:01.985 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:49:01.985 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:49:01.985 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:49:01.985 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:49:01.985 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:01.985 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:01.985 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:01.985 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:49:01.985 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:01.985 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:01.985 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:02.243 00:49:02.502 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:49:02.502 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:49:02.502 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:49:02.502 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:02.502 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:49:02.502 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:02.502 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:02.502 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:02.502 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:49:02.502 { 00:49:02.502 "auth": { 00:49:02.502 "dhgroup": "ffdhe2048", 00:49:02.502 "digest": "sha256", 00:49:02.502 "state": "completed" 00:49:02.502 }, 00:49:02.502 "cntlid": 13, 00:49:02.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:49:02.502 "listen_address": { 00:49:02.502 "adrfam": "IPv4", 00:49:02.502 "traddr": "10.0.0.3", 00:49:02.502 "trsvcid": "4420", 00:49:02.502 "trtype": "TCP" 00:49:02.502 }, 00:49:02.502 "peer_address": { 00:49:02.502 "adrfam": "IPv4", 00:49:02.502 "traddr": "10.0.0.1", 00:49:02.502 "trsvcid": "52302", 00:49:02.502 "trtype": "TCP" 00:49:02.502 }, 00:49:02.502 "qid": 0, 00:49:02.502 "state": "enabled", 00:49:02.502 "thread": "nvmf_tgt_poll_group_000" 00:49:02.502 } 00:49:02.502 ]' 00:49:02.760 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:49:02.760 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:49:02.760 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:49:02.760 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:49:02.760 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:49:02.760 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:49:02.760 10:00:09 
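
The rounds that authenticate in one direction only fall out of the ckey array built at target/auth.sh@68, expanded just above. Roughly, with keyid standing in for the function's third positional argument ($3):

    # expands to --dhchap-ctrlr-key "ckeyN" only when a controller secret
    # exists for key id N; otherwise the array is empty and the flag vanishes
    # from every command that splices in "${ckey[@]}"
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "key$keyid" "${ckey[@]}"

Because ckeys[3] is empty in this run, the key3 rounds above pass --dhchap-key key3 alone and so exercise unidirectional authentication for that key.
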
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:49:03.018 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:49:03.018 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37:
00:49:03.018 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37:
00:49:03.585 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:49:03.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:49:03.585 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541
00:49:03.585 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:49:03.585 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
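
The kernel-initiator leg just traced feeds nvme-cli the same material in the DHHC-1 secret representation, DHHC-1:<id>:<base64>:, where the two-digit id tags how the secret is stored (00 for an untransformed secret, 01/02/03 for one transformed with SHA-256/384/512) and the base64 blob carries the secret plus a CRC-32 check value. A sketch of the command pair; the flags are copied from the trace, and the secrets themselves are elided:

    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 \
        --dhchap-secret 'DHHC-1:02:<base64 secret>:' \
        --dhchap-ctrl-secret 'DHHC-1:01:<base64 secret>:'
    # expect: "NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
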
00:49:03.844 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:03.844 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:49:03.844 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:49:03.844 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:49:04.437 00:49:04.437 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:49:04.437 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:49:04.437 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:49:04.696 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:04.696 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:49:04.696 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:04.696 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:04.696 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:04.696 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:49:04.696 { 00:49:04.696 "auth": { 00:49:04.696 "dhgroup": "ffdhe2048", 00:49:04.696 "digest": "sha256", 00:49:04.696 "state": "completed" 00:49:04.696 }, 00:49:04.696 "cntlid": 15, 00:49:04.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:49:04.696 "listen_address": { 00:49:04.696 "adrfam": "IPv4", 00:49:04.696 "traddr": "10.0.0.3", 00:49:04.696 "trsvcid": "4420", 00:49:04.696 "trtype": "TCP" 00:49:04.696 }, 00:49:04.696 "peer_address": { 00:49:04.696 "adrfam": "IPv4", 00:49:04.696 "traddr": "10.0.0.1", 00:49:04.696 "trsvcid": "52330", 00:49:04.696 "trtype": "TCP" 00:49:04.696 }, 00:49:04.696 "qid": 0, 00:49:04.696 "state": "enabled", 00:49:04.696 "thread": "nvmf_tgt_poll_group_000" 00:49:04.696 } 00:49:04.696 ]' 00:49:04.696 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:49:04.696 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:49:04.696 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:49:04.696 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:49:04.696 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:49:04.696 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:49:04.697 
10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:49:04.697 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:49:04.955 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:49:04.955 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:49:05.522 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:49:05.781 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:49:05.781 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:49:05.781 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:05.781 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:05.781 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:05.781 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:49:05.781 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:49:05.781 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:49:05.781 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:49:06.041 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:49:06.041 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:49:06.041 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:49:06.041 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:49:06.041 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:49:06.041 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:49:06.041 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:06.041 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:06.041 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:49:06.041 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:06.041 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:06.041 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:06.042 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:06.301 00:49:06.301 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:49:06.301 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:49:06.301 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:49:06.562 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:06.562 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:49:06.562 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:06.562 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:06.562 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:06.562 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:49:06.562 { 00:49:06.562 "auth": { 00:49:06.562 "dhgroup": "ffdhe3072", 00:49:06.562 "digest": "sha256", 00:49:06.562 "state": "completed" 00:49:06.562 }, 00:49:06.562 "cntlid": 17, 00:49:06.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:49:06.562 "listen_address": { 00:49:06.562 "adrfam": "IPv4", 00:49:06.562 "traddr": "10.0.0.3", 00:49:06.562 "trsvcid": "4420", 00:49:06.562 "trtype": "TCP" 00:49:06.562 }, 00:49:06.562 "peer_address": { 00:49:06.562 "adrfam": "IPv4", 00:49:06.562 "traddr": "10.0.0.1", 00:49:06.562 "trsvcid": "52348", 00:49:06.562 "trtype": "TCP" 00:49:06.562 }, 00:49:06.562 "qid": 0, 00:49:06.562 "state": "enabled", 00:49:06.562 "thread": "nvmf_tgt_poll_group_000" 00:49:06.562 } 00:49:06.562 ]' 00:49:06.562 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:49:06.562 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:49:06.562 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:49:06.562 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:49:06.562 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:49:06.821 10:00:13 
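
The @119/@120/@121 markers that keep recurring are the outer sweep over DH groups and key ids. Reconstructed control flow; the loop variable names are assumptions inferred from the ${dhgroups[@]} and ${!keys[@]} expansions, and this stretch of the log covers null, ffdhe2048, ffdhe3072 and ffdhe4096, all with sha256:

    for dhgroup in "${dhgroups[@]}"; do   # null ffdhe2048 ffdhe3072 ffdhe4096 ...
        for keyid in "${!keys[@]}"; do    # 0 1 2 3
            # re-arm the host's allowed algorithms before every round
            hostrpc bdev_nvme_set_options --dhchap-digests sha256 \
                --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done
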
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:49:06.821 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:49:06.821 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:49:07.082 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:49:07.082 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:49:07.657 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:49:07.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:49:07.657 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:49:07.657 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:07.657 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:07.657 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:07.657 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:49:07.657 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:49:07.657 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:49:07.927 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:49:07.927 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:49:07.927 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:49:07.927 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:49:07.927 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:49:07.927 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:49:07.927 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:49:07.927 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:07.927 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:07.927 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:07.927 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:07.927 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:07.927 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:08.186 00:49:08.186 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:49:08.186 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:49:08.186 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:49:08.445 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:08.445 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:49:08.445 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:08.445 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:08.445 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:08.445 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:49:08.445 { 00:49:08.445 "auth": { 00:49:08.445 "dhgroup": "ffdhe3072", 00:49:08.445 "digest": "sha256", 00:49:08.445 "state": "completed" 00:49:08.445 }, 00:49:08.445 "cntlid": 19, 00:49:08.445 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:49:08.445 "listen_address": { 00:49:08.445 "adrfam": "IPv4", 00:49:08.445 "traddr": "10.0.0.3", 00:49:08.445 "trsvcid": "4420", 00:49:08.445 "trtype": "TCP" 00:49:08.445 }, 00:49:08.445 "peer_address": { 00:49:08.445 "adrfam": "IPv4", 00:49:08.445 "traddr": "10.0.0.1", 00:49:08.445 "trsvcid": "52382", 00:49:08.445 "trtype": "TCP" 00:49:08.445 }, 00:49:08.445 "qid": 0, 00:49:08.445 "state": "enabled", 00:49:08.445 "thread": "nvmf_tgt_poll_group_000" 00:49:08.445 } 00:49:08.445 ]' 00:49:08.445 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:49:08.445 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:49:08.446 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:49:08.446 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
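
On the target side each round is bracketed by an allow-list update, so credentials from one digest/dhgroup combination never leak into the next. Schematically, with rpc_cmd again assumed to address the target's RPC socket and key1/ckey1 as in the round above:

    # authorize the host for this round's key pair ...
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # ... attach/connect, check the qpair auth state, disconnect ...
    # ... then revoke it before the next combination is configured
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
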
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:49:08.446 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:49:08.705 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:49:08.705 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:49:08.705 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:49:08.705 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:49:08.705 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:49:09.641 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:49:09.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:49:09.641 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:49:09.641 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:09.641 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:09.641 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:09.641 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:49:09.641 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:49:09.641 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:49:09.899 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:49:09.899 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:49:09.899 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:49:09.899 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:49:09.899 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:49:09.899 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:49:09.899 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:09.899 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:09.899 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:09.899 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:09.899 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:09.899 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:09.899 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:10.159 00:49:10.159 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:49:10.159 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:49:10.159 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:49:10.418 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:10.418 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:49:10.418 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:10.418 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:10.418 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:10.418 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:49:10.418 { 00:49:10.418 "auth": { 00:49:10.418 "dhgroup": "ffdhe3072", 00:49:10.418 "digest": "sha256", 00:49:10.418 "state": "completed" 00:49:10.418 }, 00:49:10.418 "cntlid": 21, 00:49:10.418 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:49:10.418 "listen_address": { 00:49:10.418 "adrfam": "IPv4", 00:49:10.418 "traddr": "10.0.0.3", 00:49:10.418 "trsvcid": "4420", 00:49:10.418 "trtype": "TCP" 00:49:10.418 }, 00:49:10.418 "peer_address": { 00:49:10.418 "adrfam": "IPv4", 00:49:10.418 "traddr": "10.0.0.1", 00:49:10.418 "trsvcid": "40374", 00:49:10.418 "trtype": "TCP" 00:49:10.418 }, 00:49:10.418 "qid": 0, 00:49:10.418 "state": "enabled", 00:49:10.418 "thread": "nvmf_tgt_poll_group_000" 00:49:10.418 } 00:49:10.418 ]' 00:49:10.418 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:49:10.418 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:49:10.418 10:00:17 
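
Two helpers the trace keeps expanding are worth spelling out once. These are plausible reconstructions from the @31 and @60 expansions, not the script's literal text:

    hostrpc() {   # target/auth.sh@31: host-process RPCs use a private socket
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
    }
    bdev_connect() {   # target/auth.sh@60: fixed addressing, caller adds -b and keys
        hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
            -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 "$@"
    }

Keeping the SPDK host's bdev_nvme layer on /var/tmp/host.sock while the subsystem RPCs go to the target's default socket is what lets a single VM play both initiator and target in this test.
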
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:49:10.418 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:49:10.418 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:49:10.677 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:49:10.677 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:49:10.677 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:49:10.937 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37: 00:49:10.937 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37: 00:49:11.507 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:49:11.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:49:11.507 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:49:11.507 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:11.507 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:11.507 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:11.507 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:49:11.507 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:49:11.507 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:49:11.768 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:49:11.768 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:49:11.768 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:49:11.768 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:49:11.768 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:49:11.768 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:49:11.768 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key3 00:49:11.768 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:11.768 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:11.768 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:11.768 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:49:11.768 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:49:11.768 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:49:12.035 00:49:12.035 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:49:12.035 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:49:12.035 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:49:12.319 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:12.319 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:49:12.319 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:12.319 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:12.319 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:12.319 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:49:12.319 { 00:49:12.319 "auth": { 00:49:12.319 "dhgroup": "ffdhe3072", 00:49:12.319 "digest": "sha256", 00:49:12.319 "state": "completed" 00:49:12.319 }, 00:49:12.319 "cntlid": 23, 00:49:12.319 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:49:12.319 "listen_address": { 00:49:12.319 "adrfam": "IPv4", 00:49:12.319 "traddr": "10.0.0.3", 00:49:12.319 "trsvcid": "4420", 00:49:12.319 "trtype": "TCP" 00:49:12.319 }, 00:49:12.319 "peer_address": { 00:49:12.319 "adrfam": "IPv4", 00:49:12.319 "traddr": "10.0.0.1", 00:49:12.319 "trsvcid": "40390", 00:49:12.319 "trtype": "TCP" 00:49:12.319 }, 00:49:12.319 "qid": 0, 00:49:12.319 "state": "enabled", 00:49:12.319 "thread": "nvmf_tgt_poll_group_000" 00:49:12.319 } 00:49:12.319 ]' 00:49:12.319 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:49:12.319 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:49:12.319 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:49:12.319 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:49:12.319 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:49:12.319 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:49:12.319 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:49:12.319 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:49:12.578 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:49:12.578 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:49:13.517 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:49:13.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:49:13.517 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:49:13.517 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:13.517 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:13.517 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:13.517 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:49:13.517 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:49:13.517 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:49:13.517 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:49:13.517 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:49:13.518 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:49:13.518 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:49:13.518 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:49:13.518 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:49:13.518 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:49:13.518 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:13.518 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:13.518 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:13.518 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:13.518 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:13.518 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:13.518 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:13.777 00:49:14.037 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:49:14.037 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:49:14.037 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:49:14.037 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:14.037 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:49:14.037 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:14.037 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:14.296 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:14.296 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:49:14.296 { 00:49:14.296 "auth": { 00:49:14.296 "dhgroup": "ffdhe4096", 00:49:14.296 "digest": "sha256", 00:49:14.296 "state": "completed" 00:49:14.296 }, 00:49:14.296 "cntlid": 25, 00:49:14.296 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:49:14.296 "listen_address": { 00:49:14.296 "adrfam": "IPv4", 00:49:14.296 "traddr": "10.0.0.3", 00:49:14.296 "trsvcid": "4420", 00:49:14.296 "trtype": "TCP" 00:49:14.296 }, 00:49:14.296 "peer_address": { 00:49:14.296 "adrfam": "IPv4", 00:49:14.296 "traddr": "10.0.0.1", 00:49:14.296 "trsvcid": "40428", 00:49:14.296 "trtype": "TCP" 00:49:14.296 }, 00:49:14.296 "qid": 0, 00:49:14.296 "state": "enabled", 00:49:14.296 "thread": "nvmf_tgt_poll_group_000" 00:49:14.296 } 00:49:14.296 ]' 00:49:14.296 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:49:14.296 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:49:14.296 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:49:14.296 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:49:14.296 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:49:14.296 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:49:14.296 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:49:14.296 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:49:14.556 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:49:14.556 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:49:15.125 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:49:15.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:49:15.125 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:49:15.125 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:15.125 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:15.125 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:15.125 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:49:15.125 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:49:15.125 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:49:15.385 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:49:15.385 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:49:15.385 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:49:15.385 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:49:15.385 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:49:15.385 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:49:15.385 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:15.385 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:15.385 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:15.385 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:15.385 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:15.385 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:15.385 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:15.645 00:49:15.645 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:49:15.645 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:49:15.645 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:49:15.904 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:15.904 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:49:15.904 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:15.904 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:16.164 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:16.164 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:49:16.164 { 00:49:16.164 "auth": { 00:49:16.164 "dhgroup": "ffdhe4096", 00:49:16.164 "digest": "sha256", 00:49:16.164 "state": "completed" 00:49:16.164 }, 00:49:16.164 "cntlid": 27, 00:49:16.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:49:16.164 "listen_address": { 00:49:16.164 "adrfam": "IPv4", 00:49:16.164 "traddr": "10.0.0.3", 00:49:16.164 "trsvcid": "4420", 00:49:16.164 "trtype": "TCP" 00:49:16.164 }, 00:49:16.164 "peer_address": { 00:49:16.164 "adrfam": "IPv4", 00:49:16.164 "traddr": "10.0.0.1", 00:49:16.164 "trsvcid": "40452", 00:49:16.164 "trtype": "TCP" 00:49:16.164 }, 00:49:16.164 "qid": 0, 
00:49:16.164 "state": "enabled", 00:49:16.164 "thread": "nvmf_tgt_poll_group_000" 00:49:16.164 } 00:49:16.164 ]' 00:49:16.164 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:49:16.164 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:49:16.164 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:49:16.164 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:49:16.164 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:49:16.164 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:49:16.164 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:49:16.164 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:49:16.424 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:49:16.424 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:49:16.995 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:49:16.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:49:16.995 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:49:16.995 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:16.995 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:16.995 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:16.995 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:49:16.995 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:49:16.995 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:49:17.255 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:49:17.255 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:49:17.255 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha256 00:49:17.255 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:49:17.255 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:49:17.255 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:49:17.255 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:17.255 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:17.255 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:17.255 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:17.255 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:17.255 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:17.255 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:17.515 00:49:17.515 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:49:17.515 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:49:17.515 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:49:17.774 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:17.774 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:49:17.774 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:17.774 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:18.034 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:18.034 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:49:18.034 { 00:49:18.034 "auth": { 00:49:18.034 "dhgroup": "ffdhe4096", 00:49:18.034 "digest": "sha256", 00:49:18.034 "state": "completed" 00:49:18.034 }, 00:49:18.034 "cntlid": 29, 00:49:18.034 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:49:18.034 "listen_address": { 00:49:18.034 "adrfam": "IPv4", 00:49:18.034 "traddr": "10.0.0.3", 00:49:18.034 "trsvcid": "4420", 00:49:18.034 "trtype": "TCP" 00:49:18.034 }, 00:49:18.034 "peer_address": { 00:49:18.034 "adrfam": "IPv4", 00:49:18.034 "traddr": "10.0.0.1", 
00:49:18.034 "trsvcid": "40478", 00:49:18.034 "trtype": "TCP" 00:49:18.034 }, 00:49:18.034 "qid": 0, 00:49:18.034 "state": "enabled", 00:49:18.034 "thread": "nvmf_tgt_poll_group_000" 00:49:18.034 } 00:49:18.034 ]' 00:49:18.034 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:49:18.034 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:49:18.034 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:49:18.034 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:49:18.034 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:49:18.034 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:49:18.034 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:49:18.034 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:49:18.292 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37: 00:49:18.292 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37: 00:49:18.860 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:49:18.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:49:18.860 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:49:18.860 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:18.860 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:18.860 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:18.860 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:49:18.860 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:49:18.860 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:49:19.119 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:49:19.119 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # 
local digest dhgroup key ckey qpairs 00:49:19.119 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:49:19.119 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:49:19.119 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:49:19.119 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:49:19.120 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key3 00:49:19.120 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:19.120 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:19.120 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:19.120 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:49:19.120 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:49:19.120 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:49:19.378 00:49:19.637 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:49:19.637 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:49:19.637 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:49:19.637 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:19.637 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:49:19.637 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:19.637 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:19.637 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:19.637 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:49:19.637 { 00:49:19.637 "auth": { 00:49:19.637 "dhgroup": "ffdhe4096", 00:49:19.637 "digest": "sha256", 00:49:19.637 "state": "completed" 00:49:19.637 }, 00:49:19.637 "cntlid": 31, 00:49:19.637 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:49:19.637 "listen_address": { 00:49:19.637 "adrfam": "IPv4", 00:49:19.637 "traddr": "10.0.0.3", 00:49:19.637 "trsvcid": "4420", 00:49:19.637 "trtype": "TCP" 00:49:19.637 }, 00:49:19.637 "peer_address": { 00:49:19.637 "adrfam": "IPv4", 00:49:19.637 "traddr": 
"10.0.0.1", 00:49:19.637 "trsvcid": "40504", 00:49:19.637 "trtype": "TCP" 00:49:19.637 }, 00:49:19.637 "qid": 0, 00:49:19.637 "state": "enabled", 00:49:19.637 "thread": "nvmf_tgt_poll_group_000" 00:49:19.637 } 00:49:19.637 ]' 00:49:19.897 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:49:19.897 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:49:19.897 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:49:19.897 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:49:19.897 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:49:19.897 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:49:19.897 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:49:19.897 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:49:20.157 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:49:20.157 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:49:20.724 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:49:20.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:49:20.724 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:49:20.724 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:20.724 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:20.724 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:20.724 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:49:20.724 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:49:20.724 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:49:20.724 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:49:20.983 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:49:20.983 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:49:20.983 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:49:20.983 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:49:20.983 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:49:20.983 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:49:20.983 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:20.983 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:20.983 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:20.983 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:20.983 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:20.983 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:20.983 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:21.558 00:49:21.558 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:49:21.558 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:49:21.558 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:49:21.825 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:21.825 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:49:21.825 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:21.825 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:21.825 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:21.825 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:49:21.825 { 00:49:21.825 "auth": { 00:49:21.825 "dhgroup": "ffdhe6144", 00:49:21.825 "digest": "sha256", 00:49:21.825 "state": "completed" 00:49:21.825 }, 00:49:21.825 "cntlid": 33, 00:49:21.825 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:49:21.825 "listen_address": { 00:49:21.825 "adrfam": "IPv4", 00:49:21.825 "traddr": "10.0.0.3", 00:49:21.825 "trsvcid": "4420", 00:49:21.825 
"trtype": "TCP" 00:49:21.825 }, 00:49:21.825 "peer_address": { 00:49:21.825 "adrfam": "IPv4", 00:49:21.825 "traddr": "10.0.0.1", 00:49:21.825 "trsvcid": "37506", 00:49:21.825 "trtype": "TCP" 00:49:21.825 }, 00:49:21.825 "qid": 0, 00:49:21.825 "state": "enabled", 00:49:21.825 "thread": "nvmf_tgt_poll_group_000" 00:49:21.825 } 00:49:21.825 ]' 00:49:21.825 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:49:21.825 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:49:21.825 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:49:21.825 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:49:21.825 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:49:21.825 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:49:21.825 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:49:21.825 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:49:22.085 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:49:22.085 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:49:22.653 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:49:22.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:49:22.653 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:49:22.653 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:22.653 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:22.653 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:22.653 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:49:22.653 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:49:22.653 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
00:49:22.912 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:49:22.912 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:49:22.912 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:49:22.912 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:49:22.912 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:49:22.912 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:49:22.913 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:22.913 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:22.913 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:22.913 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:22.913 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:22.913 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:22.913 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:23.481 00:49:23.481 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:49:23.481 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:49:23.481 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:49:23.741 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:23.741 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:49:23.741 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:23.741 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:23.741 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:23.741 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:49:23.741 { 00:49:23.741 "auth": { 00:49:23.741 "dhgroup": "ffdhe6144", 00:49:23.741 "digest": "sha256", 00:49:23.741 "state": "completed" 00:49:23.741 }, 00:49:23.741 "cntlid": 35, 00:49:23.741 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:49:23.741 "listen_address": { 00:49:23.741 "adrfam": "IPv4", 00:49:23.741 "traddr": "10.0.0.3", 00:49:23.741 "trsvcid": "4420", 00:49:23.741 "trtype": "TCP" 00:49:23.741 }, 00:49:23.741 "peer_address": { 00:49:23.741 "adrfam": "IPv4", 00:49:23.741 "traddr": "10.0.0.1", 00:49:23.741 "trsvcid": "37530", 00:49:23.741 "trtype": "TCP" 00:49:23.741 }, 00:49:23.741 "qid": 0, 00:49:23.741 "state": "enabled", 00:49:23.741 "thread": "nvmf_tgt_poll_group_000" 00:49:23.741 } 00:49:23.741 ]' 00:49:23.741 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:49:23.741 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:49:23.741 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:49:23.741 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:49:23.741 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:49:23.741 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:49:23.741 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:49:23.741 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:49:24.001 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:49:24.001 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:49:24.569 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:49:24.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:49:24.827 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:49:24.827 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:24.827 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:24.827 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:24.827 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:49:24.827 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:49:24.827 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:49:24.827 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:49:24.827 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:49:24.827 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:49:24.827 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:49:24.827 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:49:24.827 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:49:24.827 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:24.827 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:24.827 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:24.827 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:24.828 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:25.087 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:25.087 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:25.347 00:49:25.347 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:49:25.347 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:49:25.347 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:49:25.607 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:25.607 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:49:25.607 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:25.607 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:25.607 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:25.607 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:49:25.607 { 00:49:25.607 "auth": { 00:49:25.607 "dhgroup": "ffdhe6144", 
00:49:25.607 "digest": "sha256", 00:49:25.607 "state": "completed" 00:49:25.607 }, 00:49:25.607 "cntlid": 37, 00:49:25.607 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:49:25.607 "listen_address": { 00:49:25.607 "adrfam": "IPv4", 00:49:25.607 "traddr": "10.0.0.3", 00:49:25.607 "trsvcid": "4420", 00:49:25.607 "trtype": "TCP" 00:49:25.607 }, 00:49:25.607 "peer_address": { 00:49:25.607 "adrfam": "IPv4", 00:49:25.607 "traddr": "10.0.0.1", 00:49:25.607 "trsvcid": "37562", 00:49:25.607 "trtype": "TCP" 00:49:25.607 }, 00:49:25.607 "qid": 0, 00:49:25.607 "state": "enabled", 00:49:25.607 "thread": "nvmf_tgt_poll_group_000" 00:49:25.607 } 00:49:25.607 ]' 00:49:25.607 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:49:25.607 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:49:25.607 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:49:25.867 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:49:25.867 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:49:25.867 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:49:25.867 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:49:25.867 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:49:26.126 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37: 00:49:26.126 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37: 00:49:26.703 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:49:26.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:49:26.703 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:49:26.703 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:26.703 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:26.703 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:26.703 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:49:26.703 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:49:26.703 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:49:26.962 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:49:26.962 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:49:26.962 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:49:26.962 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:49:26.962 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:49:26.962 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:49:26.962 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key3 00:49:26.962 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:26.962 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:26.962 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:26.962 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:49:26.962 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:49:26.962 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:49:27.531 00:49:27.531 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:49:27.531 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:49:27.532 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:49:27.790 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:27.790 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:49:27.790 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:27.790 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:27.790 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:27.790 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:49:27.790 { 00:49:27.790 "auth": { 00:49:27.790 "dhgroup": 
"ffdhe6144", 00:49:27.790 "digest": "sha256", 00:49:27.790 "state": "completed" 00:49:27.790 }, 00:49:27.790 "cntlid": 39, 00:49:27.790 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:49:27.790 "listen_address": { 00:49:27.790 "adrfam": "IPv4", 00:49:27.790 "traddr": "10.0.0.3", 00:49:27.790 "trsvcid": "4420", 00:49:27.790 "trtype": "TCP" 00:49:27.790 }, 00:49:27.790 "peer_address": { 00:49:27.791 "adrfam": "IPv4", 00:49:27.791 "traddr": "10.0.0.1", 00:49:27.791 "trsvcid": "37590", 00:49:27.791 "trtype": "TCP" 00:49:27.791 }, 00:49:27.791 "qid": 0, 00:49:27.791 "state": "enabled", 00:49:27.791 "thread": "nvmf_tgt_poll_group_000" 00:49:27.791 } 00:49:27.791 ]' 00:49:27.791 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:49:27.791 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:49:27.791 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:49:27.791 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:49:27.791 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:49:27.791 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:49:27.791 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:49:27.791 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:49:28.050 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:49:28.050 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:49:28.616 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:49:28.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:49:28.616 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:49:28.616 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:28.616 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:28.876 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:28.876 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:49:28.876 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:49:28.876 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:49:28.876 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:49:28.876 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:49:28.876 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:49:28.876 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:49:28.876 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:49:28.876 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:49:28.876 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:49:28.876 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:28.876 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:28.876 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:29.134 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:29.134 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:29.135 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:29.135 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:29.702 00:49:29.702 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:49:29.702 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:49:29.702 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:49:29.960 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:29.960 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:49:29.960 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:29.960 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:29.960 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:29.960 10:00:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:49:29.960 { 00:49:29.960 "auth": { 00:49:29.960 "dhgroup": "ffdhe8192", 00:49:29.960 "digest": "sha256", 00:49:29.960 "state": "completed" 00:49:29.960 }, 00:49:29.960 "cntlid": 41, 00:49:29.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:49:29.960 "listen_address": { 00:49:29.960 "adrfam": "IPv4", 00:49:29.960 "traddr": "10.0.0.3", 00:49:29.960 "trsvcid": "4420", 00:49:29.960 "trtype": "TCP" 00:49:29.960 }, 00:49:29.960 "peer_address": { 00:49:29.960 "adrfam": "IPv4", 00:49:29.960 "traddr": "10.0.0.1", 00:49:29.960 "trsvcid": "37606", 00:49:29.960 "trtype": "TCP" 00:49:29.960 }, 00:49:29.960 "qid": 0, 00:49:29.960 "state": "enabled", 00:49:29.960 "thread": "nvmf_tgt_poll_group_000" 00:49:29.960 } 00:49:29.960 ]' 00:49:29.960 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:49:29.960 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:49:29.960 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:49:29.960 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:49:29.960 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:49:29.960 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:49:29.960 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:49:29.960 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:49:30.248 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:49:30.248 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:49:31.186 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:49:31.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:49:31.186 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:49:31.186 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:31.186 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:31.186 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:31.186 10:00:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:49:31.186 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:49:31.186 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:49:31.186 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:49:31.186 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:49:31.186 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:49:31.186 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:49:31.186 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:49:31.186 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:49:31.186 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:31.186 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:31.186 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:31.186 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:31.186 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:31.186 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:31.186 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:31.753 00:49:32.012 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:49:32.012 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:49:32.012 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:49:32.012 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:32.272 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:49:32.272 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:32.272 10:00:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:32.272 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:32.272 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:49:32.272 { 00:49:32.272 "auth": { 00:49:32.272 "dhgroup": "ffdhe8192", 00:49:32.272 "digest": "sha256", 00:49:32.272 "state": "completed" 00:49:32.272 }, 00:49:32.272 "cntlid": 43, 00:49:32.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:49:32.272 "listen_address": { 00:49:32.272 "adrfam": "IPv4", 00:49:32.272 "traddr": "10.0.0.3", 00:49:32.272 "trsvcid": "4420", 00:49:32.272 "trtype": "TCP" 00:49:32.272 }, 00:49:32.272 "peer_address": { 00:49:32.272 "adrfam": "IPv4", 00:49:32.272 "traddr": "10.0.0.1", 00:49:32.272 "trsvcid": "42434", 00:49:32.272 "trtype": "TCP" 00:49:32.272 }, 00:49:32.272 "qid": 0, 00:49:32.272 "state": "enabled", 00:49:32.272 "thread": "nvmf_tgt_poll_group_000" 00:49:32.272 } 00:49:32.272 ]' 00:49:32.272 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:49:32.272 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:49:32.272 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:49:32.272 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:49:32.272 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:49:32.272 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:49:32.272 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:49:32.272 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:49:32.531 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:49:32.531 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:49:33.100 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:49:33.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:49:33.100 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:49:33.100 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:33.100 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
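The same pattern now repeats for the next combination (sha256 / ffdhe8192 / key2, which begins below). Distilled from the RPC invocations echoed in this trace, one pass reduces to the sketch that follows; every socket path, address, NQN, and key name is taken verbatim from the log, while rpc_cmd is the harness's target-side RPC wrapper, whose definition is outside this excerpt:

    # 1. host side: pin the initiator to one digest and one DH group
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    # 2. target side: register the host NQN with its DH-HMAC-CHAP key
    #    (the controller key makes the authentication bidirectional)
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # 3. host side: attaching the controller drives the authentication exchange
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # 4. verify the qpair, detach, re-check with nvme-cli, disconnect, remove the host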
00:49:33.100 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:33.100 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:49:33.100 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:49:33.100 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:49:33.359 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:49:33.360 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:49:33.360 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:49:33.360 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:49:33.360 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:49:33.360 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:49:33.360 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:33.360 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:33.360 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:33.360 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:33.360 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:33.360 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:33.360 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:33.928 00:49:34.187 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:49:34.187 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:49:34.187 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:49:34.187 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:34.187 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:49:34.188 10:00:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:34.188 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:34.188 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:34.188 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:49:34.188 { 00:49:34.188 "auth": { 00:49:34.188 "dhgroup": "ffdhe8192", 00:49:34.188 "digest": "sha256", 00:49:34.188 "state": "completed" 00:49:34.188 }, 00:49:34.188 "cntlid": 45, 00:49:34.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:49:34.188 "listen_address": { 00:49:34.188 "adrfam": "IPv4", 00:49:34.188 "traddr": "10.0.0.3", 00:49:34.188 "trsvcid": "4420", 00:49:34.188 "trtype": "TCP" 00:49:34.188 }, 00:49:34.188 "peer_address": { 00:49:34.188 "adrfam": "IPv4", 00:49:34.188 "traddr": "10.0.0.1", 00:49:34.188 "trsvcid": "42458", 00:49:34.188 "trtype": "TCP" 00:49:34.188 }, 00:49:34.188 "qid": 0, 00:49:34.188 "state": "enabled", 00:49:34.188 "thread": "nvmf_tgt_poll_group_000" 00:49:34.188 } 00:49:34.188 ]' 00:49:34.188 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:49:34.447 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:49:34.447 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:49:34.447 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:49:34.447 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:49:34.447 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:49:34.447 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:49:34.447 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:49:34.707 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37: 00:49:34.707 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37: 00:49:35.286 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:49:35.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:49:35.286 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:49:35.286 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
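The pass/fail criteria applied after each attach are the jq filters visible above. In sketch form, using the same RPCs and filters as the trace (rpc_cmd again stands in for the target-side wrapper):

    # host-side controller must come up under the expected name
    [[ $(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
            bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    # target-side qpair must report the negotiated parameters and a finished handshake
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]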
00:49:35.286 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:35.286 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:35.286 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:49:35.286 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:49:35.286 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:49:35.546 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:49:35.546 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:49:35.546 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:49:35.546 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:49:35.546 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:49:35.546 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:49:35.546 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key3 00:49:35.546 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:35.546 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:35.546 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:35.546 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:49:35.546 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:49:35.546 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:49:36.112 00:49:36.112 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:49:36.112 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:49:36.112 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:49:36.370 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:36.370 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:49:36.370 
10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:36.370 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:36.370 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:36.370 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:49:36.370 { 00:49:36.370 "auth": { 00:49:36.370 "dhgroup": "ffdhe8192", 00:49:36.370 "digest": "sha256", 00:49:36.370 "state": "completed" 00:49:36.370 }, 00:49:36.370 "cntlid": 47, 00:49:36.370 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:49:36.370 "listen_address": { 00:49:36.370 "adrfam": "IPv4", 00:49:36.370 "traddr": "10.0.0.3", 00:49:36.370 "trsvcid": "4420", 00:49:36.370 "trtype": "TCP" 00:49:36.370 }, 00:49:36.370 "peer_address": { 00:49:36.370 "adrfam": "IPv4", 00:49:36.370 "traddr": "10.0.0.1", 00:49:36.370 "trsvcid": "42490", 00:49:36.370 "trtype": "TCP" 00:49:36.370 }, 00:49:36.370 "qid": 0, 00:49:36.370 "state": "enabled", 00:49:36.370 "thread": "nvmf_tgt_poll_group_000" 00:49:36.370 } 00:49:36.370 ]' 00:49:36.370 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:49:36.370 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:49:36.370 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:49:36.370 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:49:36.370 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:49:36.629 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:49:36.629 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:49:36.629 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:49:36.887 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:49:36.887 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:49:37.453 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:49:37.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:49:37.453 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:49:37.453 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:37.453 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
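From here the run moves from sha256 to sha384, starting over at the null DH group. The three loop headers are echoed verbatim at target/auth.sh lines 118-120; the array contents in the sketch below are an inference from the combinations that actually appear in this excerpt (sha256 and sha384; null, ffdhe2048, ffdhe6144, ffdhe8192; keys 0 through 3) and may not match the script exactly:

    digests=(sha256 sha384)                          # later digests, if any, are not visible here
    dhgroups=(null ffdhe2048 ffdhe6144 ffdhe8192)    # order inferred from the trace
    keys=(key0 key1 key2 key3)                       # names taken from the --dhchap-key values above
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                : # one set_options / add_host / attach / verify / detach pass, as traced above
            done
        done
    done

With --dhchap-dhgroups null, no ephemeral Diffie-Hellman exchange augments the challenge; the handshake rests on the configured DH-HMAC-CHAP secret alone, which is why the qpair JSON in the following passes reports "dhgroup": "null" while still reaching "state": "completed".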
00:49:37.453 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:37.453 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:49:37.453 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:49:37.453 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:49:37.453 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:49:37.453 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:49:37.721 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:49:37.721 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:49:37.721 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:49:37.721 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:49:37.721 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:49:37.721 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:49:37.721 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:37.721 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:37.721 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:37.721 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:37.721 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:37.721 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:37.721 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:37.989 00:49:37.989 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:49:37.989 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:49:37.989 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:49:38.246 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:38.246 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:49:38.246 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:38.246 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:38.246 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:38.246 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:49:38.246 { 00:49:38.246 "auth": { 00:49:38.246 "dhgroup": "null", 00:49:38.246 "digest": "sha384", 00:49:38.246 "state": "completed" 00:49:38.246 }, 00:49:38.246 "cntlid": 49, 00:49:38.246 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:49:38.246 "listen_address": { 00:49:38.246 "adrfam": "IPv4", 00:49:38.246 "traddr": "10.0.0.3", 00:49:38.246 "trsvcid": "4420", 00:49:38.246 "trtype": "TCP" 00:49:38.246 }, 00:49:38.246 "peer_address": { 00:49:38.246 "adrfam": "IPv4", 00:49:38.246 "traddr": "10.0.0.1", 00:49:38.246 "trsvcid": "42524", 00:49:38.246 "trtype": "TCP" 00:49:38.246 }, 00:49:38.246 "qid": 0, 00:49:38.246 "state": "enabled", 00:49:38.246 "thread": "nvmf_tgt_poll_group_000" 00:49:38.246 } 00:49:38.246 ]' 00:49:38.246 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:49:38.246 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:49:38.246 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:49:38.246 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:49:38.246 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:49:38.246 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:49:38.246 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:49:38.246 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:49:38.504 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:49:38.504 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:49:39.071 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:49:39.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:49:39.071 10:00:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:49:39.071 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:39.071 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:39.071 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:39.071 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:49:39.071 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:49:39.071 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:49:39.329 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:49:39.329 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:49:39.329 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:49:39.329 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:49:39.329 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:49:39.329 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:49:39.329 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:39.329 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:39.329 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:39.329 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:39.329 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:39.329 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:39.329 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:39.587 00:49:39.844 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:49:39.844 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:49:39.844 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:49:40.101 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:40.101 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:49:40.101 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:40.101 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:40.101 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:40.101 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:49:40.101 { 00:49:40.101 "auth": { 00:49:40.101 "dhgroup": "null", 00:49:40.101 "digest": "sha384", 00:49:40.101 "state": "completed" 00:49:40.101 }, 00:49:40.101 "cntlid": 51, 00:49:40.101 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:49:40.101 "listen_address": { 00:49:40.101 "adrfam": "IPv4", 00:49:40.101 "traddr": "10.0.0.3", 00:49:40.101 "trsvcid": "4420", 00:49:40.101 "trtype": "TCP" 00:49:40.102 }, 00:49:40.102 "peer_address": { 00:49:40.102 "adrfam": "IPv4", 00:49:40.102 "traddr": "10.0.0.1", 00:49:40.102 "trsvcid": "50990", 00:49:40.102 "trtype": "TCP" 00:49:40.102 }, 00:49:40.102 "qid": 0, 00:49:40.102 "state": "enabled", 00:49:40.102 "thread": "nvmf_tgt_poll_group_000" 00:49:40.102 } 00:49:40.102 ]' 00:49:40.102 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:49:40.102 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:49:40.102 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:49:40.102 10:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:49:40.102 10:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:49:40.102 10:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:49:40.102 10:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:49:40.102 10:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:49:40.359 10:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:49:40.359 10:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:49:40.925 10:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:49:40.925 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:49:40.925 10:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:49:40.925 10:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:40.925 10:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:40.925 10:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:40.925 10:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:49:40.925 10:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:49:40.925 10:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:49:41.185 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:49:41.185 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:49:41.185 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:49:41.185 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:49:41.185 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:49:41.185 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:49:41.185 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:41.185 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:41.185 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:41.185 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:41.185 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:41.185 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:41.185 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:41.444 00:49:41.703 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:49:41.703 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:49:41.703 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:49:41.703 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:41.704 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:49:41.704 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:41.704 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:41.964 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:41.964 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:49:41.964 { 00:49:41.964 "auth": { 00:49:41.964 "dhgroup": "null", 00:49:41.964 "digest": "sha384", 00:49:41.964 "state": "completed" 00:49:41.964 }, 00:49:41.964 "cntlid": 53, 00:49:41.964 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:49:41.964 "listen_address": { 00:49:41.964 "adrfam": "IPv4", 00:49:41.964 "traddr": "10.0.0.3", 00:49:41.964 "trsvcid": "4420", 00:49:41.964 "trtype": "TCP" 00:49:41.964 }, 00:49:41.964 "peer_address": { 00:49:41.964 "adrfam": "IPv4", 00:49:41.964 "traddr": "10.0.0.1", 00:49:41.964 "trsvcid": "51016", 00:49:41.964 "trtype": "TCP" 00:49:41.964 }, 00:49:41.964 "qid": 0, 00:49:41.964 "state": "enabled", 00:49:41.964 "thread": "nvmf_tgt_poll_group_000" 00:49:41.964 } 00:49:41.964 ]' 00:49:41.964 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:49:41.964 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:49:41.964 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:49:41.964 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:49:41.964 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:49:41.964 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:49:41.964 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:49:41.964 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:49:42.223 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37: 00:49:42.223 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37: 00:49:42.793 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:49:42.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:49:42.793 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:49:42.793 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:42.793 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:42.793 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:42.793 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:49:42.793 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:49:42.793 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:49:43.052 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:49:43.052 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:49:43.052 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:49:43.053 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:49:43.053 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:49:43.053 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:49:43.053 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key3 00:49:43.053 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:43.053 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:43.053 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:43.053 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:49:43.053 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:49:43.053 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:49:43.312 00:49:43.312 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:49:43.312 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
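Two RPC servers are in play throughout this trace: rpc_cmd talks to the target application, while hostrpc (target/auth.sh line 31, whose expansion is echoed on the next entry) reaches a second SPDK instance acting as the host, listening on /var/tmp/host.sock. A minimal reconstruction of the wrapper, assuming it does nothing beyond forwarding its arguments to rpc.py with the host socket:

    hostrpc() {
        # same rpc.py client, different UNIX-domain socket
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
    }

    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'    # usage as traced here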
00:49:43.312 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:49:43.571 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:43.571 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:49:43.571 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:43.571 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:43.571 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:43.571 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:49:43.571 { 00:49:43.571 "auth": { 00:49:43.571 "dhgroup": "null", 00:49:43.571 "digest": "sha384", 00:49:43.571 "state": "completed" 00:49:43.571 }, 00:49:43.571 "cntlid": 55, 00:49:43.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:49:43.571 "listen_address": { 00:49:43.571 "adrfam": "IPv4", 00:49:43.571 "traddr": "10.0.0.3", 00:49:43.571 "trsvcid": "4420", 00:49:43.571 "trtype": "TCP" 00:49:43.571 }, 00:49:43.571 "peer_address": { 00:49:43.571 "adrfam": "IPv4", 00:49:43.571 "traddr": "10.0.0.1", 00:49:43.571 "trsvcid": "51046", 00:49:43.571 "trtype": "TCP" 00:49:43.571 }, 00:49:43.571 "qid": 0, 00:49:43.571 "state": "enabled", 00:49:43.571 "thread": "nvmf_tgt_poll_group_000" 00:49:43.571 } 00:49:43.571 ]' 00:49:43.571 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:49:43.571 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:49:43.571 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:49:43.830 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:49:43.830 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:49:43.830 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:49:43.830 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:49:43.830 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:49:44.089 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:49:44.089 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:49:44.657 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:49:44.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
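Besides the SPDK-to-SPDK attach, every pass cross-checks the Linux kernel initiator: nvme-cli connects with the same DHHC-1 secret and must later report exactly one controller disconnected. The pair below is the key3 variant just traced; key3 carries no controller secret, so only the host authenticates and no --dhchap-ctrl-secret is passed:

    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 \
        --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 \
        --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=:
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expected: disconnected 1 controller(s)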
00:49:44.657 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:49:44.657 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:44.657 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:44.657 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:44.657 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:49:44.657 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:49:44.657 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:49:44.657 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:49:44.916 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:49:44.916 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:49:44.916 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:49:44.916 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:49:44.916 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:49:44.916 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:49:44.916 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:44.916 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:44.916 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:44.916 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:44.916 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:44.916 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:44.916 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:45.176 00:49:45.176 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:49:45.176 
10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:49:45.176 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:49:45.436 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:45.436 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:49:45.436 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:45.436 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:45.436 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:45.436 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:49:45.436 { 00:49:45.436 "auth": { 00:49:45.436 "dhgroup": "ffdhe2048", 00:49:45.436 "digest": "sha384", 00:49:45.436 "state": "completed" 00:49:45.436 }, 00:49:45.436 "cntlid": 57, 00:49:45.436 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:49:45.436 "listen_address": { 00:49:45.436 "adrfam": "IPv4", 00:49:45.436 "traddr": "10.0.0.3", 00:49:45.436 "trsvcid": "4420", 00:49:45.436 "trtype": "TCP" 00:49:45.436 }, 00:49:45.436 "peer_address": { 00:49:45.436 "adrfam": "IPv4", 00:49:45.436 "traddr": "10.0.0.1", 00:49:45.436 "trsvcid": "51078", 00:49:45.436 "trtype": "TCP" 00:49:45.436 }, 00:49:45.436 "qid": 0, 00:49:45.436 "state": "enabled", 00:49:45.436 "thread": "nvmf_tgt_poll_group_000" 00:49:45.436 } 00:49:45.436 ]' 00:49:45.436 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:49:45.695 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:49:45.695 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:49:45.695 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:49:45.695 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:49:45.695 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:49:45.695 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:49:45.695 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:49:45.954 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:49:45.954 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: 
--dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:49:46.522 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:49:46.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:49:46.522 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:49:46.522 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:46.522 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:46.522 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:46.522 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:49:46.522 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:49:46.522 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:49:46.784 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:49:46.784 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:49:46.784 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:49:46.784 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:49:46.784 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:49:46.784 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:49:46.784 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:46.784 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:46.784 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:46.784 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:46.784 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:46.784 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:46.784 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:47.044 00:49:47.044 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:49:47.044 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:49:47.044 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:49:47.303 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:47.303 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:49:47.303 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:47.303 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:47.303 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:47.303 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:49:47.303 { 00:49:47.303 "auth": { 00:49:47.303 "dhgroup": "ffdhe2048", 00:49:47.303 "digest": "sha384", 00:49:47.303 "state": "completed" 00:49:47.303 }, 00:49:47.303 "cntlid": 59, 00:49:47.303 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:49:47.303 "listen_address": { 00:49:47.303 "adrfam": "IPv4", 00:49:47.303 "traddr": "10.0.0.3", 00:49:47.303 "trsvcid": "4420", 00:49:47.303 "trtype": "TCP" 00:49:47.303 }, 00:49:47.303 "peer_address": { 00:49:47.303 "adrfam": "IPv4", 00:49:47.303 "traddr": "10.0.0.1", 00:49:47.303 "trsvcid": "51088", 00:49:47.303 "trtype": "TCP" 00:49:47.303 }, 00:49:47.303 "qid": 0, 00:49:47.303 "state": "enabled", 00:49:47.303 "thread": "nvmf_tgt_poll_group_000" 00:49:47.303 } 00:49:47.303 ]' 00:49:47.303 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:49:47.303 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:49:47.303 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:49:47.561 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:49:47.561 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:49:47.561 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:49:47.561 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:49:47.561 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:49:47.821 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:49:47.821 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:49:48.389 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:49:48.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:49:48.389 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:49:48.389 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:48.389 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:48.389 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:48.389 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:49:48.389 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:49:48.389 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:49:48.649 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:49:48.649 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:49:48.649 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:49:48.649 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:49:48.649 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:49:48.649 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:49:48.649 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:48.649 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:48.649 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:48.649 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:48.649 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:48.649 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:48.649 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:48.908 00:49:48.908 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:49:48.908 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:49:48.908 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:49:49.168 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:49.168 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:49:49.168 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:49.168 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:49.168 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:49.168 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:49:49.168 { 00:49:49.168 "auth": { 00:49:49.168 "dhgroup": "ffdhe2048", 00:49:49.168 "digest": "sha384", 00:49:49.168 "state": "completed" 00:49:49.168 }, 00:49:49.168 "cntlid": 61, 00:49:49.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:49:49.168 "listen_address": { 00:49:49.168 "adrfam": "IPv4", 00:49:49.168 "traddr": "10.0.0.3", 00:49:49.168 "trsvcid": "4420", 00:49:49.168 "trtype": "TCP" 00:49:49.168 }, 00:49:49.168 "peer_address": { 00:49:49.168 "adrfam": "IPv4", 00:49:49.168 "traddr": "10.0.0.1", 00:49:49.168 "trsvcid": "51122", 00:49:49.168 "trtype": "TCP" 00:49:49.168 }, 00:49:49.168 "qid": 0, 00:49:49.168 "state": "enabled", 00:49:49.168 "thread": "nvmf_tgt_poll_group_000" 00:49:49.168 } 00:49:49.168 ]' 00:49:49.168 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:49:49.168 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:49:49.168 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:49:49.168 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:49:49.168 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:49:49.168 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:49:49.168 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:49:49.168 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:49:49.427 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37: 00:49:49.427 10:00:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37: 00:49:50.373 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:49:50.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:49:50.373 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:49:50.373 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:50.373 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:50.373 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:50.373 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:49:50.373 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:49:50.373 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:49:50.373 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:49:50.373 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:49:50.373 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:49:50.373 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:49:50.373 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:49:50.373 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:49:50.373 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key3 00:49:50.373 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:50.373 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:50.373 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:50.373 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:49:50.373 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:49:50.373 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:49:50.632 00:49:50.632 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:49:50.632 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:49:50.632 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:49:50.893 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:50.893 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:49:50.893 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:50.893 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:50.893 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:50.893 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:49:50.893 { 00:49:50.893 "auth": { 00:49:50.893 "dhgroup": "ffdhe2048", 00:49:50.893 "digest": "sha384", 00:49:50.893 "state": "completed" 00:49:50.893 }, 00:49:50.893 "cntlid": 63, 00:49:50.893 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:49:50.893 "listen_address": { 00:49:50.893 "adrfam": "IPv4", 00:49:50.893 "traddr": "10.0.0.3", 00:49:50.893 "trsvcid": "4420", 00:49:50.893 "trtype": "TCP" 00:49:50.893 }, 00:49:50.893 "peer_address": { 00:49:50.893 "adrfam": "IPv4", 00:49:50.893 "traddr": "10.0.0.1", 00:49:50.893 "trsvcid": "54346", 00:49:50.893 "trtype": "TCP" 00:49:50.893 }, 00:49:50.893 "qid": 0, 00:49:50.893 "state": "enabled", 00:49:50.893 "thread": "nvmf_tgt_poll_group_000" 00:49:50.893 } 00:49:50.893 ]' 00:49:50.893 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:49:51.153 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:49:51.153 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:49:51.153 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:49:51.153 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:49:51.153 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:49:51.153 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:49:51.153 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:49:51.414 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:49:51.414 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:49:51.981 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:49:51.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:49:51.981 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:49:51.981 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:51.981 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:51.981 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:51.981 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:49:51.981 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:49:51.981 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:49:51.981 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:49:52.240 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:49:52.240 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:49:52.240 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:49:52.240 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:49:52.240 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:49:52.240 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:49:52.240 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:52.240 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:52.240 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:52.240 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:52.240 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:52.240 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:49:52.240 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:52.499 00:49:52.499 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:49:52.499 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:49:52.499 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:49:52.759 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:52.759 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:49:52.759 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:52.759 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:52.759 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:52.759 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:49:52.759 { 00:49:52.759 "auth": { 00:49:52.759 "dhgroup": "ffdhe3072", 00:49:52.759 "digest": "sha384", 00:49:52.759 "state": "completed" 00:49:52.759 }, 00:49:52.759 "cntlid": 65, 00:49:52.759 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:49:52.759 "listen_address": { 00:49:52.759 "adrfam": "IPv4", 00:49:52.759 "traddr": "10.0.0.3", 00:49:52.759 "trsvcid": "4420", 00:49:52.759 "trtype": "TCP" 00:49:52.759 }, 00:49:52.759 "peer_address": { 00:49:52.759 "adrfam": "IPv4", 00:49:52.759 "traddr": "10.0.0.1", 00:49:52.759 "trsvcid": "54362", 00:49:52.759 "trtype": "TCP" 00:49:52.759 }, 00:49:52.759 "qid": 0, 00:49:52.759 "state": "enabled", 00:49:52.759 "thread": "nvmf_tgt_poll_group_000" 00:49:52.759 } 00:49:52.759 ]' 00:49:52.759 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:49:53.018 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:49:53.018 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:49:53.018 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:49:53.018 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:49:53.018 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:49:53.018 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:49:53.018 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:49:53.278 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:49:53.278 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:49:53.846 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:49:53.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:49:53.846 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:49:53.846 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:53.846 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:53.846 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:53.846 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:49:53.846 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:49:53.846 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:49:54.105 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:49:54.105 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:49:54.105 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:49:54.105 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:49:54.105 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:49:54.105 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:49:54.105 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:54.105 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:54.105 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:54.105 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:54.105 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:54.105 10:01:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:54.105 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:49:54.364 00:49:54.364 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:49:54.364 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:49:54.364 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:49:54.623 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:54.623 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:49:54.623 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:54.623 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:54.623 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:54.623 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:49:54.623 { 00:49:54.623 "auth": { 00:49:54.623 "dhgroup": "ffdhe3072", 00:49:54.623 "digest": "sha384", 00:49:54.623 "state": "completed" 00:49:54.623 }, 00:49:54.623 "cntlid": 67, 00:49:54.623 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:49:54.623 "listen_address": { 00:49:54.623 "adrfam": "IPv4", 00:49:54.623 "traddr": "10.0.0.3", 00:49:54.623 "trsvcid": "4420", 00:49:54.623 "trtype": "TCP" 00:49:54.623 }, 00:49:54.623 "peer_address": { 00:49:54.623 "adrfam": "IPv4", 00:49:54.623 "traddr": "10.0.0.1", 00:49:54.623 "trsvcid": "54378", 00:49:54.623 "trtype": "TCP" 00:49:54.623 }, 00:49:54.623 "qid": 0, 00:49:54.623 "state": "enabled", 00:49:54.623 "thread": "nvmf_tgt_poll_group_000" 00:49:54.623 } 00:49:54.623 ]' 00:49:54.623 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:49:54.883 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:49:54.883 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:49:54.883 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:49:54.883 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:49:54.883 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:49:54.883 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:49:54.883 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:49:55.146 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:49:55.146 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:49:55.720 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:49:55.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:49:55.720 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:49:55.720 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:55.720 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:55.720 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:55.720 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:49:55.720 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:49:55.720 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:49:55.980 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:49:55.980 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:49:55.980 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:49:55.980 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:49:55.980 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:49:55.980 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:49:55.980 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:55.980 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:55.980 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:55.980 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:55.980 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:55.980 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:55.980 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:49:56.240 00:49:56.240 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:49:56.240 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:49:56.240 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:49:56.499 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:56.499 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:49:56.499 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:56.499 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:56.499 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:56.499 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:49:56.499 { 00:49:56.499 "auth": { 00:49:56.499 "dhgroup": "ffdhe3072", 00:49:56.499 "digest": "sha384", 00:49:56.499 "state": "completed" 00:49:56.499 }, 00:49:56.499 "cntlid": 69, 00:49:56.499 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:49:56.499 "listen_address": { 00:49:56.499 "adrfam": "IPv4", 00:49:56.499 "traddr": "10.0.0.3", 00:49:56.499 "trsvcid": "4420", 00:49:56.499 "trtype": "TCP" 00:49:56.499 }, 00:49:56.499 "peer_address": { 00:49:56.499 "adrfam": "IPv4", 00:49:56.499 "traddr": "10.0.0.1", 00:49:56.499 "trsvcid": "54402", 00:49:56.499 "trtype": "TCP" 00:49:56.499 }, 00:49:56.499 "qid": 0, 00:49:56.499 "state": "enabled", 00:49:56.499 "thread": "nvmf_tgt_poll_group_000" 00:49:56.499 } 00:49:56.499 ]' 00:49:56.499 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:49:56.499 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:49:56.499 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:49:56.499 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:49:56.759 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:49:56.759 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:49:56.759 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:49:56.759 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:49:57.018 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37: 00:49:57.018 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37: 00:49:57.586 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:49:57.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:49:57.586 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:49:57.586 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:57.586 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:57.586 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:57.586 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:49:57.586 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:49:57.586 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:49:57.844 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:49:57.844 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:49:57.844 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:49:57.844 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:49:57.844 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:49:57.844 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:49:57.845 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key3 00:49:57.845 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:57.845 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:57.845 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:57.845 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:49:57.845 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:49:57.845 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:49:58.103 00:49:58.103 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:49:58.103 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:49:58.103 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:49:58.361 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:58.361 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:49:58.361 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:58.361 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:58.361 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:58.361 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:49:58.361 { 00:49:58.361 "auth": { 00:49:58.361 "dhgroup": "ffdhe3072", 00:49:58.361 "digest": "sha384", 00:49:58.361 "state": "completed" 00:49:58.361 }, 00:49:58.361 "cntlid": 71, 00:49:58.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:49:58.361 "listen_address": { 00:49:58.361 "adrfam": "IPv4", 00:49:58.361 "traddr": "10.0.0.3", 00:49:58.361 "trsvcid": "4420", 00:49:58.361 "trtype": "TCP" 00:49:58.361 }, 00:49:58.361 "peer_address": { 00:49:58.361 "adrfam": "IPv4", 00:49:58.361 "traddr": "10.0.0.1", 00:49:58.361 "trsvcid": "54420", 00:49:58.361 "trtype": "TCP" 00:49:58.361 }, 00:49:58.361 "qid": 0, 00:49:58.361 "state": "enabled", 00:49:58.361 "thread": "nvmf_tgt_poll_group_000" 00:49:58.361 } 00:49:58.361 ]' 00:49:58.361 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:49:58.361 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:49:58.361 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:49:58.619 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:49:58.619 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:49:58.619 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:49:58.619 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:49:58.619 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:49:58.878 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:49:58.878 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:49:59.446 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:49:59.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:49:59.446 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:49:59.446 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:59.446 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:59.446 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:59.446 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:49:59.446 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:49:59.446 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:49:59.446 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:49:59.705 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:49:59.705 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:49:59.705 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:49:59.705 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:49:59.705 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:49:59.705 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:49:59.705 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:59.705 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:59.705 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:49:59.705 10:01:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:59.705 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:59.705 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:59.705 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:49:59.964 00:50:00.223 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:00.223 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:00.223 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:00.223 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:00.223 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:00.223 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:00.223 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:00.223 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:00.223 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:00.223 { 00:50:00.223 "auth": { 00:50:00.223 "dhgroup": "ffdhe4096", 00:50:00.223 "digest": "sha384", 00:50:00.223 "state": "completed" 00:50:00.223 }, 00:50:00.223 "cntlid": 73, 00:50:00.223 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:50:00.223 "listen_address": { 00:50:00.223 "adrfam": "IPv4", 00:50:00.223 "traddr": "10.0.0.3", 00:50:00.223 "trsvcid": "4420", 00:50:00.223 "trtype": "TCP" 00:50:00.223 }, 00:50:00.223 "peer_address": { 00:50:00.223 "adrfam": "IPv4", 00:50:00.223 "traddr": "10.0.0.1", 00:50:00.223 "trsvcid": "52818", 00:50:00.223 "trtype": "TCP" 00:50:00.223 }, 00:50:00.223 "qid": 0, 00:50:00.223 "state": "enabled", 00:50:00.223 "thread": "nvmf_tgt_poll_group_000" 00:50:00.223 } 00:50:00.223 ]' 00:50:00.223 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:00.482 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:50:00.482 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:00.482 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:50:00.482 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:00.482 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:00.482 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:00.482 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:00.739 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:50:00.740 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:50:01.307 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:01.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:01.307 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:50:01.307 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:01.307 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:01.307 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:01.307 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:01.307 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:50:01.307 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:50:01.565 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:50:01.565 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:01.565 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:50:01.565 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:50:01.565 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:50:01.565 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:01.565 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:01.565 10:01:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:01.565 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:01.565 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:01.565 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:01.565 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:01.565 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:01.822 00:50:02.081 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:02.081 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:02.081 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:02.339 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:02.340 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:02.340 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:02.340 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:02.340 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:02.340 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:02.340 { 00:50:02.340 "auth": { 00:50:02.340 "dhgroup": "ffdhe4096", 00:50:02.340 "digest": "sha384", 00:50:02.340 "state": "completed" 00:50:02.340 }, 00:50:02.340 "cntlid": 75, 00:50:02.340 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:50:02.340 "listen_address": { 00:50:02.340 "adrfam": "IPv4", 00:50:02.340 "traddr": "10.0.0.3", 00:50:02.340 "trsvcid": "4420", 00:50:02.340 "trtype": "TCP" 00:50:02.340 }, 00:50:02.340 "peer_address": { 00:50:02.340 "adrfam": "IPv4", 00:50:02.340 "traddr": "10.0.0.1", 00:50:02.340 "trsvcid": "52850", 00:50:02.340 "trtype": "TCP" 00:50:02.340 }, 00:50:02.340 "qid": 0, 00:50:02.340 "state": "enabled", 00:50:02.340 "thread": "nvmf_tgt_poll_group_000" 00:50:02.340 } 00:50:02.340 ]' 00:50:02.340 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:02.340 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:50:02.340 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:02.340 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:50:02.340 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:02.340 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:02.340 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:02.340 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:02.599 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:50:02.599 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:50:03.166 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:03.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:03.166 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:50:03.166 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:03.166 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:03.166 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:03.166 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:03.166 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:50:03.166 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:50:03.430 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:50:03.430 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:03.430 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:50:03.430 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:50:03.430 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:50:03.430 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:03.430 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:03.430 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:03.430 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:03.430 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:03.430 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:03.430 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:03.430 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:03.701 00:50:03.961 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:03.961 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:03.961 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:04.220 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:04.220 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:04.220 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:04.220 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:04.220 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:04.220 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:04.220 { 00:50:04.220 "auth": { 00:50:04.220 "dhgroup": "ffdhe4096", 00:50:04.220 "digest": "sha384", 00:50:04.220 "state": "completed" 00:50:04.220 }, 00:50:04.220 "cntlid": 77, 00:50:04.220 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:50:04.220 "listen_address": { 00:50:04.220 "adrfam": "IPv4", 00:50:04.220 "traddr": "10.0.0.3", 00:50:04.220 "trsvcid": "4420", 00:50:04.220 "trtype": "TCP" 00:50:04.220 }, 00:50:04.220 "peer_address": { 00:50:04.220 "adrfam": "IPv4", 00:50:04.220 "traddr": "10.0.0.1", 00:50:04.220 "trsvcid": "52876", 00:50:04.220 "trtype": "TCP" 00:50:04.220 }, 00:50:04.220 "qid": 0, 00:50:04.220 "state": "enabled", 00:50:04.220 "thread": "nvmf_tgt_poll_group_000" 00:50:04.220 } 00:50:04.220 ]' 00:50:04.220 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:04.220 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:50:04.220 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:50:04.220 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:50:04.220 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:04.220 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:04.220 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:04.220 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:04.480 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37: 00:50:04.480 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37: 00:50:05.047 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:05.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:05.047 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:50:05.047 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:05.047 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:05.047 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:05.047 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:05.047 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:50:05.047 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:50:05.306 10:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:50:05.306 10:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:05.306 10:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:50:05.307 10:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:50:05.307 10:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:50:05.307 10:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:05.307 10:01:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key3 00:50:05.307 10:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:05.307 10:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:05.307 10:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:05.307 10:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:50:05.307 10:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:50:05.307 10:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:50:05.565 00:50:05.824 10:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:05.824 10:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:05.824 10:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:05.824 10:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:06.083 10:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:06.083 10:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:06.083 10:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:06.083 10:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:06.083 10:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:06.083 { 00:50:06.083 "auth": { 00:50:06.083 "dhgroup": "ffdhe4096", 00:50:06.083 "digest": "sha384", 00:50:06.083 "state": "completed" 00:50:06.083 }, 00:50:06.083 "cntlid": 79, 00:50:06.083 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:50:06.083 "listen_address": { 00:50:06.083 "adrfam": "IPv4", 00:50:06.083 "traddr": "10.0.0.3", 00:50:06.083 "trsvcid": "4420", 00:50:06.083 "trtype": "TCP" 00:50:06.083 }, 00:50:06.083 "peer_address": { 00:50:06.083 "adrfam": "IPv4", 00:50:06.083 "traddr": "10.0.0.1", 00:50:06.083 "trsvcid": "52904", 00:50:06.083 "trtype": "TCP" 00:50:06.083 }, 00:50:06.083 "qid": 0, 00:50:06.083 "state": "enabled", 00:50:06.083 "thread": "nvmf_tgt_poll_group_000" 00:50:06.083 } 00:50:06.083 ]' 00:50:06.083 10:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:06.083 10:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:50:06.083 10:01:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:06.083 10:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:50:06.083 10:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:06.083 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:06.083 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:06.083 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:06.341 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:50:06.341 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:50:06.910 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:07.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:07.169 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:50:07.169 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:07.169 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:07.169 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:07.169 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:50:07.169 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:07.169 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:50:07.169 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:50:07.427 10:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:50:07.427 10:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:07.427 10:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:50:07.427 10:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:50:07.427 10:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:50:07.427 10:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:07.427 10:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:07.427 10:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:07.427 10:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:07.427 10:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:07.427 10:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:07.427 10:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:07.427 10:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:07.687 00:50:07.687 10:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:07.687 10:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:07.687 10:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:07.946 10:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:07.946 10:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:07.946 10:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:07.946 10:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:08.237 10:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:08.237 10:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:08.237 { 00:50:08.237 "auth": { 00:50:08.237 "dhgroup": "ffdhe6144", 00:50:08.237 "digest": "sha384", 00:50:08.237 "state": "completed" 00:50:08.237 }, 00:50:08.237 "cntlid": 81, 00:50:08.237 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:50:08.237 "listen_address": { 00:50:08.237 "adrfam": "IPv4", 00:50:08.237 "traddr": "10.0.0.3", 00:50:08.237 "trsvcid": "4420", 00:50:08.237 "trtype": "TCP" 00:50:08.237 }, 00:50:08.237 "peer_address": { 00:50:08.237 "adrfam": "IPv4", 00:50:08.237 "traddr": "10.0.0.1", 00:50:08.237 "trsvcid": "52932", 00:50:08.237 "trtype": "TCP" 00:50:08.237 }, 00:50:08.237 "qid": 0, 00:50:08.237 "state": "enabled", 00:50:08.237 "thread": "nvmf_tgt_poll_group_000" 00:50:08.237 } 00:50:08.237 ]' 00:50:08.237 10:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
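
The sha384/ffdhe6144 iteration traced above is easier to follow in condensed form. The sketch below is a minimal reconstruction of one such round-trip, built only from commands that appear verbatim in this log; it assumes the key0/ckey0 keyring entries were registered earlier in the run, elides the DHHC-1 secrets, and sends the unmarked rpc.py calls to the target's default RPC socket while host-side calls go to /var/tmp/host.sock, as the test does.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  HOSTSOCK=/var/tmp/host.sock
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # Host side: restrict the initiator to one digest/DH-group combination.
  $RPC -s $HOSTSOCK bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

  # Target side: register the host with its DH-HMAC-CHAP key (and an optional
  # controller key for bidirectional authentication).
  $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Host side: attaching the controller triggers the authentication exchange.
  $RPC -s $HOSTSOCK bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Target side: confirm the qpair negotiated the expected parameters.
  qpairs=$($RPC nvmf_subsystem_get_qpairs $SUBNQN)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # Tear down, then repeat the handshake through the kernel initiator using
  # raw DHHC-1 secrets (the full secrets appear in the log; elided here).
  $RPC -s $HOSTSOCK bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.3 -n $SUBNQN -i 1 -q $HOSTNQN \
      --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 \
      --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
  nvme disconnect -n $SUBNQN
  $RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN

On the secret format: in the DHHC-1:<t>:<base64>: encoding used by nvme-cli and SPDK, the middle field records the HMAC transformation selected when the secret was generated (per nvme-cli's gen-dhchap-key: 0 = no transform, 1 = SHA-256, 2 = SHA-384, 3 = SHA-512), which is why the host and controller secrets in this run carry different prefixes such as DHHC-1:00: and DHHC-1:03:.
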
00:50:08.237 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:50:08.237 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:08.237 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:50:08.237 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:08.237 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:08.237 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:08.237 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:08.510 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:50:08.510 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:50:09.078 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:09.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:09.078 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:50:09.078 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:09.078 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:09.336 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:09.336 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:09.336 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:50:09.336 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:50:09.336 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:50:09.336 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:09.336 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:50:09.336 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:50:09.595 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:50:09.596 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:09.596 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:09.596 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:09.596 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:09.596 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:09.596 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:09.596 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:09.596 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:09.854 00:50:09.854 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:09.854 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:09.854 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:10.113 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:10.113 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:10.113 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:10.113 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:10.113 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:10.113 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:10.113 { 00:50:10.113 "auth": { 00:50:10.113 "dhgroup": "ffdhe6144", 00:50:10.113 "digest": "sha384", 00:50:10.113 "state": "completed" 00:50:10.113 }, 00:50:10.113 "cntlid": 83, 00:50:10.113 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:50:10.113 "listen_address": { 00:50:10.113 "adrfam": "IPv4", 00:50:10.113 "traddr": "10.0.0.3", 00:50:10.113 "trsvcid": "4420", 00:50:10.113 "trtype": "TCP" 00:50:10.113 }, 00:50:10.113 "peer_address": { 00:50:10.113 "adrfam": "IPv4", 00:50:10.113 "traddr": "10.0.0.1", 00:50:10.113 "trsvcid": "50768", 00:50:10.113 "trtype": "TCP" 00:50:10.113 }, 00:50:10.113 "qid": 0, 00:50:10.113 "state": 
"enabled", 00:50:10.113 "thread": "nvmf_tgt_poll_group_000" 00:50:10.113 } 00:50:10.113 ]' 00:50:10.372 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:10.372 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:50:10.372 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:10.372 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:50:10.372 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:10.372 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:10.372 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:10.372 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:10.631 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:50:10.631 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:50:11.202 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:11.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:11.202 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:50:11.202 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:11.202 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:11.202 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:11.202 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:11.202 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:50:11.202 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:50:11.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:50:11.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:11.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha384 00:50:11.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:50:11.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:50:11.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:11.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:11.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:11.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:11.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:11.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:11.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:11.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:12.031 00:50:12.031 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:12.031 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:12.031 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:12.031 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:12.031 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:12.031 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:12.031 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:12.031 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:12.031 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:12.031 { 00:50:12.031 "auth": { 00:50:12.031 "dhgroup": "ffdhe6144", 00:50:12.031 "digest": "sha384", 00:50:12.031 "state": "completed" 00:50:12.031 }, 00:50:12.031 "cntlid": 85, 00:50:12.031 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:50:12.031 "listen_address": { 00:50:12.031 "adrfam": "IPv4", 00:50:12.031 "traddr": "10.0.0.3", 00:50:12.031 "trsvcid": "4420", 00:50:12.031 "trtype": "TCP" 00:50:12.031 }, 00:50:12.031 "peer_address": { 00:50:12.031 "adrfam": "IPv4", 00:50:12.031 "traddr": "10.0.0.1", 00:50:12.031 
"trsvcid": "50782", 00:50:12.031 "trtype": "TCP" 00:50:12.031 }, 00:50:12.031 "qid": 0, 00:50:12.031 "state": "enabled", 00:50:12.031 "thread": "nvmf_tgt_poll_group_000" 00:50:12.031 } 00:50:12.031 ]' 00:50:12.031 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:12.291 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:50:12.291 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:12.291 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:50:12.291 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:12.291 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:12.291 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:12.291 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:12.550 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37: 00:50:12.551 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37: 00:50:13.119 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:13.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:13.119 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:50:13.119 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:13.119 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:13.119 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:13.119 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:13.119 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:50:13.119 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:50:13.378 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:50:13.378 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:50:13.378 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:50:13.378 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:50:13.378 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:50:13.378 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:13.378 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key3 00:50:13.378 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:13.378 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:13.378 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:13.378 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:50:13.378 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:50:13.378 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:50:13.947 00:50:13.947 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:13.947 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:13.947 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:14.206 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:14.206 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:14.206 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:14.206 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:14.206 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:14.206 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:14.206 { 00:50:14.206 "auth": { 00:50:14.206 "dhgroup": "ffdhe6144", 00:50:14.206 "digest": "sha384", 00:50:14.206 "state": "completed" 00:50:14.206 }, 00:50:14.206 "cntlid": 87, 00:50:14.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:50:14.206 "listen_address": { 00:50:14.206 "adrfam": "IPv4", 00:50:14.206 "traddr": "10.0.0.3", 00:50:14.206 "trsvcid": "4420", 00:50:14.206 "trtype": "TCP" 00:50:14.206 }, 00:50:14.206 "peer_address": { 00:50:14.206 "adrfam": "IPv4", 00:50:14.206 "traddr": "10.0.0.1", 
00:50:14.206 "trsvcid": "50816", 00:50:14.206 "trtype": "TCP" 00:50:14.206 }, 00:50:14.206 "qid": 0, 00:50:14.206 "state": "enabled", 00:50:14.206 "thread": "nvmf_tgt_poll_group_000" 00:50:14.206 } 00:50:14.206 ]' 00:50:14.206 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:14.206 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:50:14.206 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:14.206 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:50:14.206 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:14.465 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:14.465 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:14.465 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:14.725 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:50:14.725 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:50:15.294 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:15.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:15.294 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:50:15.294 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:15.294 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:15.294 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:15.294 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:50:15.294 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:15.294 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:50:15.294 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:50:15.553 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:50:15.553 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- 
# local digest dhgroup key ckey qpairs 00:50:15.553 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:50:15.553 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:50:15.553 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:50:15.554 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:15.554 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:15.554 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:15.554 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:15.554 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:15.554 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:15.554 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:15.554 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:16.122 00:50:16.122 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:16.122 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:16.122 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:16.381 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:16.381 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:16.381 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:16.381 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:16.381 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:16.381 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:16.381 { 00:50:16.381 "auth": { 00:50:16.381 "dhgroup": "ffdhe8192", 00:50:16.381 "digest": "sha384", 00:50:16.381 "state": "completed" 00:50:16.381 }, 00:50:16.381 "cntlid": 89, 00:50:16.381 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:50:16.381 "listen_address": { 00:50:16.381 "adrfam": "IPv4", 00:50:16.381 "traddr": "10.0.0.3", 00:50:16.381 "trsvcid": "4420", 00:50:16.381 "trtype": "TCP" 
00:50:16.381 }, 00:50:16.381 "peer_address": { 00:50:16.381 "adrfam": "IPv4", 00:50:16.381 "traddr": "10.0.0.1", 00:50:16.381 "trsvcid": "50850", 00:50:16.381 "trtype": "TCP" 00:50:16.381 }, 00:50:16.381 "qid": 0, 00:50:16.381 "state": "enabled", 00:50:16.381 "thread": "nvmf_tgt_poll_group_000" 00:50:16.381 } 00:50:16.381 ]' 00:50:16.381 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:16.639 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:50:16.639 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:16.639 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:50:16.639 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:16.639 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:16.639 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:16.639 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:16.897 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:50:16.897 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:50:17.463 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:17.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:17.463 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:50:17.463 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:17.463 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:17.463 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:17.463 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:17.463 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:50:17.463 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:50:17.735 10:01:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:50:17.735 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:17.735 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:50:17.735 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:50:17.735 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:50:17.735 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:17.735 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:17.735 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:17.735 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:17.735 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:17.735 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:17.735 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:17.735 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:18.321 00:50:18.321 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:18.321 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:18.321 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:18.589 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:18.589 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:18.589 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:18.589 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:18.589 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:18.589 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:18.589 { 00:50:18.589 "auth": { 00:50:18.589 "dhgroup": "ffdhe8192", 00:50:18.589 "digest": "sha384", 00:50:18.589 "state": "completed" 00:50:18.589 }, 00:50:18.589 "cntlid": 91, 00:50:18.589 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:50:18.589 "listen_address": { 00:50:18.589 "adrfam": "IPv4", 00:50:18.589 "traddr": "10.0.0.3", 00:50:18.589 "trsvcid": "4420", 00:50:18.589 "trtype": "TCP" 00:50:18.589 }, 00:50:18.589 "peer_address": { 00:50:18.589 "adrfam": "IPv4", 00:50:18.589 "traddr": "10.0.0.1", 00:50:18.589 "trsvcid": "50880", 00:50:18.589 "trtype": "TCP" 00:50:18.589 }, 00:50:18.589 "qid": 0, 00:50:18.589 "state": "enabled", 00:50:18.589 "thread": "nvmf_tgt_poll_group_000" 00:50:18.589 } 00:50:18.589 ]' 00:50:18.589 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:18.589 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:50:18.589 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:18.850 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:50:18.850 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:18.850 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:18.850 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:18.850 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:18.850 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:50:18.850 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:50:19.788 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:19.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:19.788 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:50:19.788 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:19.788 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:19.788 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:19.788 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:19.788 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:50:19.788 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:50:19.788 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:50:19.788 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:19.788 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:50:19.788 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:50:19.788 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:50:19.788 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:19.788 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:19.788 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:19.788 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:19.788 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:19.788 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:19.788 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:19.788 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:20.356 00:50:20.357 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:20.357 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:20.357 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:20.616 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:20.616 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:20.616 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:20.616 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:20.616 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:20.616 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:20.616 { 00:50:20.616 "auth": { 00:50:20.616 "dhgroup": "ffdhe8192", 
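#
# [annotation] Each cycle is then verified by dumping the target-side qpairs
# (the JSON being printed here) and probing the negotiated auth fields, as the
# jq calls in the trace do; condensed (rpc as set in the first annotation):
qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
#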
00:50:20.616 "digest": "sha384", 00:50:20.616 "state": "completed" 00:50:20.616 }, 00:50:20.616 "cntlid": 93, 00:50:20.616 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:50:20.616 "listen_address": { 00:50:20.616 "adrfam": "IPv4", 00:50:20.616 "traddr": "10.0.0.3", 00:50:20.616 "trsvcid": "4420", 00:50:20.616 "trtype": "TCP" 00:50:20.616 }, 00:50:20.616 "peer_address": { 00:50:20.616 "adrfam": "IPv4", 00:50:20.616 "traddr": "10.0.0.1", 00:50:20.616 "trsvcid": "34472", 00:50:20.616 "trtype": "TCP" 00:50:20.616 }, 00:50:20.616 "qid": 0, 00:50:20.616 "state": "enabled", 00:50:20.616 "thread": "nvmf_tgt_poll_group_000" 00:50:20.616 } 00:50:20.616 ]' 00:50:20.616 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:20.876 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:50:20.876 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:20.876 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:50:20.876 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:20.876 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:20.876 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:20.876 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:21.136 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37: 00:50:21.136 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37: 00:50:21.704 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:21.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:21.704 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:50:21.704 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:21.704 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:21.704 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:21.704 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:21.704 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:50:21.704 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:50:21.964 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:50:21.964 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:21.964 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:50:21.964 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:50:21.964 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:50:21.964 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:21.964 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key3 00:50:21.964 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:21.964 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:21.964 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:21.964 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:50:21.964 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:50:21.964 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:50:22.558 00:50:22.558 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:22.558 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:22.558 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:22.817 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:22.817 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:22.817 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:22.817 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:22.817 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:22.817 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:22.817 { 00:50:22.817 "auth": { 00:50:22.817 "dhgroup": 
"ffdhe8192", 00:50:22.817 "digest": "sha384", 00:50:22.817 "state": "completed" 00:50:22.817 }, 00:50:22.817 "cntlid": 95, 00:50:22.817 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:50:22.817 "listen_address": { 00:50:22.817 "adrfam": "IPv4", 00:50:22.817 "traddr": "10.0.0.3", 00:50:22.817 "trsvcid": "4420", 00:50:22.817 "trtype": "TCP" 00:50:22.817 }, 00:50:22.817 "peer_address": { 00:50:22.817 "adrfam": "IPv4", 00:50:22.817 "traddr": "10.0.0.1", 00:50:22.817 "trsvcid": "34504", 00:50:22.817 "trtype": "TCP" 00:50:22.817 }, 00:50:22.817 "qid": 0, 00:50:22.817 "state": "enabled", 00:50:22.817 "thread": "nvmf_tgt_poll_group_000" 00:50:22.817 } 00:50:22.817 ]' 00:50:22.817 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:22.817 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:50:22.817 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:23.077 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:50:23.077 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:23.077 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:23.077 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:23.077 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:23.335 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:50:23.336 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:50:23.907 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:23.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:23.907 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:50:23.907 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:23.907 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:23.907 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:23.907 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:50:23.907 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:50:23.907 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:23.907 
10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:50:23.907 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:50:24.166 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:50:24.166 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:24.166 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:50:24.166 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:50:24.166 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:50:24.166 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:24.166 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:24.166 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:24.166 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:24.166 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:24.166 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:24.166 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:24.166 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:24.426 00:50:24.685 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:24.685 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:24.685 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:24.685 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:24.685 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:24.685 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:24.685 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:24.944 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:24.944 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:24.944 { 00:50:24.944 "auth": { 00:50:24.944 "dhgroup": "null", 00:50:24.944 "digest": "sha512", 00:50:24.944 "state": "completed" 00:50:24.944 }, 00:50:24.944 "cntlid": 97, 00:50:24.944 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:50:24.944 "listen_address": { 00:50:24.944 "adrfam": "IPv4", 00:50:24.944 "traddr": "10.0.0.3", 00:50:24.944 "trsvcid": "4420", 00:50:24.944 "trtype": "TCP" 00:50:24.944 }, 00:50:24.944 "peer_address": { 00:50:24.944 "adrfam": "IPv4", 00:50:24.944 "traddr": "10.0.0.1", 00:50:24.944 "trsvcid": "34530", 00:50:24.944 "trtype": "TCP" 00:50:24.944 }, 00:50:24.944 "qid": 0, 00:50:24.944 "state": "enabled", 00:50:24.944 "thread": "nvmf_tgt_poll_group_000" 00:50:24.944 } 00:50:24.944 ]' 00:50:24.944 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:24.944 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:50:24.944 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:24.944 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:50:24.944 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:24.944 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:24.944 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:24.944 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:25.204 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:50:25.204 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:50:25.774 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:25.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:25.774 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:50:25.774 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:25.774 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:25.774 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:50:25.774 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:25.774 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:50:25.774 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:50:26.034 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:50:26.034 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:26.034 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:50:26.034 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:50:26.034 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:50:26.034 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:26.034 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:26.034 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:26.034 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:26.034 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:26.034 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:26.034 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:26.034 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:26.294 00:50:26.294 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:26.294 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:26.294 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:26.863 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:26.863 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:26.863 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:26.863 10:01:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:26.863 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:26.863 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:26.863 { 00:50:26.863 "auth": { 00:50:26.863 "dhgroup": "null", 00:50:26.863 "digest": "sha512", 00:50:26.863 "state": "completed" 00:50:26.863 }, 00:50:26.863 "cntlid": 99, 00:50:26.863 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:50:26.863 "listen_address": { 00:50:26.863 "adrfam": "IPv4", 00:50:26.863 "traddr": "10.0.0.3", 00:50:26.863 "trsvcid": "4420", 00:50:26.863 "trtype": "TCP" 00:50:26.863 }, 00:50:26.863 "peer_address": { 00:50:26.863 "adrfam": "IPv4", 00:50:26.863 "traddr": "10.0.0.1", 00:50:26.863 "trsvcid": "34556", 00:50:26.863 "trtype": "TCP" 00:50:26.863 }, 00:50:26.863 "qid": 0, 00:50:26.863 "state": "enabled", 00:50:26.863 "thread": "nvmf_tgt_poll_group_000" 00:50:26.863 } 00:50:26.863 ]' 00:50:26.863 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:26.863 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:50:26.863 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:26.863 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:50:26.863 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:26.863 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:26.863 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:26.863 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:27.127 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:50:27.127 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:50:27.709 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:27.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:27.709 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:50:27.709 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:27.709 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:27.709 10:01:34 
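#
# [annotation] This stretch of the matrix runs sha512 with dhgroup "null".
# That is a real value, not "auth off": DH-HMAC-CHAP still performs its
# challenge/response, only without a Diffie-Hellman exchange, which is why the
# qpair dumps report "dhgroup": "null" alongside "state": "completed".
# Re-arming the host for the pair looks the same as before:
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
#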
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:27.709 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:27.709 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:50:27.709 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:50:27.969 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:50:27.969 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:27.969 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:50:27.969 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:50:27.969 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:50:27.969 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:27.969 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:27.969 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:27.969 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:27.969 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:27.969 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:27.969 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:27.969 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:28.232 00:50:28.232 10:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:28.232 10:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:28.232 10:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:28.492 10:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:28.492 10:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:28.492 10:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:50:28.492 10:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:28.492 10:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:28.492 10:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:28.492 { 00:50:28.492 "auth": { 00:50:28.492 "dhgroup": "null", 00:50:28.492 "digest": "sha512", 00:50:28.492 "state": "completed" 00:50:28.492 }, 00:50:28.492 "cntlid": 101, 00:50:28.492 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:50:28.492 "listen_address": { 00:50:28.492 "adrfam": "IPv4", 00:50:28.492 "traddr": "10.0.0.3", 00:50:28.493 "trsvcid": "4420", 00:50:28.493 "trtype": "TCP" 00:50:28.493 }, 00:50:28.493 "peer_address": { 00:50:28.493 "adrfam": "IPv4", 00:50:28.493 "traddr": "10.0.0.1", 00:50:28.493 "trsvcid": "34590", 00:50:28.493 "trtype": "TCP" 00:50:28.493 }, 00:50:28.493 "qid": 0, 00:50:28.493 "state": "enabled", 00:50:28.493 "thread": "nvmf_tgt_poll_group_000" 00:50:28.493 } 00:50:28.493 ]' 00:50:28.493 10:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:28.752 10:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:50:28.752 10:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:28.752 10:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:50:28.752 10:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:28.752 10:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:28.752 10:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:28.752 10:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:29.012 10:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37: 00:50:29.012 10:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37: 00:50:29.580 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:29.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:29.580 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:50:29.580 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:29.580 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:50:29.580 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:29.580 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:29.580 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:50:29.580 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:50:29.839 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:50:29.839 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:29.839 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:50:29.839 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:50:29.839 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:50:29.839 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:29.839 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key3 00:50:29.839 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:29.839 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:29.839 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:29.839 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:50:29.839 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:50:29.839 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:50:30.099 00:50:30.099 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:30.099 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:30.099 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:30.357 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:30.357 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:30.357 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
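#
# [annotation] Teardown between cycles, matching the hostrpc traces here:
# confirm the controller came up, detach it, and (after the nvme-cli pass)
# drop the host from the subsystem so the next keyid starts clean:
$rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
#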
xtrace_disable 00:50:30.357 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:30.357 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:30.357 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:30.357 { 00:50:30.357 "auth": { 00:50:30.357 "dhgroup": "null", 00:50:30.357 "digest": "sha512", 00:50:30.357 "state": "completed" 00:50:30.357 }, 00:50:30.357 "cntlid": 103, 00:50:30.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:50:30.358 "listen_address": { 00:50:30.358 "adrfam": "IPv4", 00:50:30.358 "traddr": "10.0.0.3", 00:50:30.358 "trsvcid": "4420", 00:50:30.358 "trtype": "TCP" 00:50:30.358 }, 00:50:30.358 "peer_address": { 00:50:30.358 "adrfam": "IPv4", 00:50:30.358 "traddr": "10.0.0.1", 00:50:30.358 "trsvcid": "37128", 00:50:30.358 "trtype": "TCP" 00:50:30.358 }, 00:50:30.358 "qid": 0, 00:50:30.358 "state": "enabled", 00:50:30.358 "thread": "nvmf_tgt_poll_group_000" 00:50:30.358 } 00:50:30.358 ]' 00:50:30.358 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:30.358 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:50:30.358 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:30.617 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:50:30.617 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:30.617 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:30.617 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:30.617 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:30.875 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:50:30.875 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:50:31.442 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:31.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:31.442 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:50:31.442 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:31.442 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:31.442 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:50:31.442 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:50:31.442 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:31.442 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:50:31.442 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:50:31.702 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:50:31.702 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:31.702 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:50:31.702 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:50:31.702 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:50:31.702 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:31.702 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:31.702 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:31.702 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:31.702 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:31.702 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:31.702 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:31.702 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:31.980 00:50:31.980 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:31.980 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:31.980 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:32.267 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:32.267 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:32.267 
10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:32.267 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:32.267 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:32.267 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:32.267 { 00:50:32.267 "auth": { 00:50:32.267 "dhgroup": "ffdhe2048", 00:50:32.267 "digest": "sha512", 00:50:32.267 "state": "completed" 00:50:32.267 }, 00:50:32.267 "cntlid": 105, 00:50:32.267 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:50:32.267 "listen_address": { 00:50:32.267 "adrfam": "IPv4", 00:50:32.267 "traddr": "10.0.0.3", 00:50:32.267 "trsvcid": "4420", 00:50:32.267 "trtype": "TCP" 00:50:32.267 }, 00:50:32.267 "peer_address": { 00:50:32.267 "adrfam": "IPv4", 00:50:32.267 "traddr": "10.0.0.1", 00:50:32.267 "trsvcid": "37152", 00:50:32.267 "trtype": "TCP" 00:50:32.267 }, 00:50:32.267 "qid": 0, 00:50:32.267 "state": "enabled", 00:50:32.267 "thread": "nvmf_tgt_poll_group_000" 00:50:32.267 } 00:50:32.267 ]' 00:50:32.267 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:32.267 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:50:32.267 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:32.267 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:50:32.267 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:32.267 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:32.267 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:32.267 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:32.527 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:50:32.527 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:50:33.096 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:33.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:33.096 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:50:33.096 10:01:40 
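#
# [annotation] The whole suite is the triple loop visible in these traces
# (target/auth.sh@118-121); the rest of the log simply walks the remaining
# (digest, dhgroup, keyid) combinations. Reconstructed from the trace:
for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
  done
done
#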
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:33.096 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:33.096 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:33.096 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:33.096 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:50:33.096 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:50:33.355 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:50:33.355 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:33.355 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:50:33.355 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:50:33.355 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:50:33.355 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:33.355 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:33.355 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:33.355 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:33.355 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:33.355 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:33.355 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:33.356 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:33.614 00:50:33.874 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:33.874 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:33.874 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:33.874 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:50:33.874 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:33.874 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:33.874 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:34.133 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:34.133 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:34.133 { 00:50:34.133 "auth": { 00:50:34.133 "dhgroup": "ffdhe2048", 00:50:34.133 "digest": "sha512", 00:50:34.133 "state": "completed" 00:50:34.133 }, 00:50:34.133 "cntlid": 107, 00:50:34.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:50:34.133 "listen_address": { 00:50:34.133 "adrfam": "IPv4", 00:50:34.133 "traddr": "10.0.0.3", 00:50:34.133 "trsvcid": "4420", 00:50:34.133 "trtype": "TCP" 00:50:34.133 }, 00:50:34.133 "peer_address": { 00:50:34.133 "adrfam": "IPv4", 00:50:34.133 "traddr": "10.0.0.1", 00:50:34.133 "trsvcid": "37194", 00:50:34.133 "trtype": "TCP" 00:50:34.133 }, 00:50:34.133 "qid": 0, 00:50:34.133 "state": "enabled", 00:50:34.133 "thread": "nvmf_tgt_poll_group_000" 00:50:34.133 } 00:50:34.133 ]' 00:50:34.133 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:34.133 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:50:34.133 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:34.133 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:50:34.133 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:34.133 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:34.133 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:34.133 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:34.393 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:50:34.393 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:50:34.962 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:34.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:34.962 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:50:34.962 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:34.962 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:34.962 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:34.963 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:34.963 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:50:34.963 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:50:35.222 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:50:35.222 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:35.222 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:50:35.222 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:50:35.222 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:50:35.222 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:35.222 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:35.222 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:35.222 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:35.222 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:35.222 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:35.222 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:35.222 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:35.481 00:50:35.740 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:35.740 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:35.740 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:50:35.999 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:35.999 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:35.999 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:35.999 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:35.999 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:35.999 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:35.999 { 00:50:35.999 "auth": { 00:50:35.999 "dhgroup": "ffdhe2048", 00:50:35.999 "digest": "sha512", 00:50:35.999 "state": "completed" 00:50:35.999 }, 00:50:35.999 "cntlid": 109, 00:50:35.999 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:50:35.999 "listen_address": { 00:50:35.999 "adrfam": "IPv4", 00:50:35.999 "traddr": "10.0.0.3", 00:50:35.999 "trsvcid": "4420", 00:50:35.999 "trtype": "TCP" 00:50:35.999 }, 00:50:35.999 "peer_address": { 00:50:35.999 "adrfam": "IPv4", 00:50:35.999 "traddr": "10.0.0.1", 00:50:35.999 "trsvcid": "37208", 00:50:35.999 "trtype": "TCP" 00:50:35.999 }, 00:50:35.999 "qid": 0, 00:50:35.999 "state": "enabled", 00:50:35.999 "thread": "nvmf_tgt_poll_group_000" 00:50:35.999 } 00:50:35.999 ]' 00:50:35.999 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:35.999 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:50:35.999 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:35.999 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:50:35.999 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:35.999 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:35.999 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:35.999 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:36.257 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37: 00:50:36.257 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37: 00:50:36.826 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:36.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
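Each pass of this test follows the same shape: the host's bdev_nvme options are restricted to a single digest/dhgroup pair, the host is registered on the target with the DH-HMAC-CHAP key pair under test, a controller attach forces the authentication handshake, and the negotiated parameters are checked over RPC before teardown. The following is a condensed sketch of one pass, using only the RPCs and nvme-cli invocations that appear verbatim in this log; the shell variable names are illustrative and the DHHC-1 secrets are placeholders, not values from the run.

# Condensed shape of one connect_authenticate pass (illustrative; see target/auth.sh).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541

# Host side (initiator) talks to /var/tmp/host.sock; target side uses the default socket.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
$rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
# Confirm what was actually negotiated on the target's qpair.
$rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth | .digest, .dhgroup, .state'
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
# The kernel-initiator variant in the log drives the same handshake through nvme-cli:
nvme connect -t tcp -a 10.0.0.3 -n $subnqn -i 1 -q $hostnqn \
    --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 \
    --dhchap-secret 'DHHC-1:00:<host key>' --dhchap-ctrl-secret 'DHHC-1:03:<ctrl key>'
nvme disconnect -n $subnqn
$rpc nvmf_subsystem_remove_host $subnqn $hostnqn

The outer loops in the log iterate this pass over every dhgroup (ffdhe2048, ffdhe3072, ffdhe4096, ...) and key index 0-3, which is why the same RPC sequence repeats below with only the --dhchap-dhgroups and keyN/ckeyN arguments changing.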
00:50:36.826 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:50:36.826 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:36.826 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:36.826 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:36.826 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:36.826 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:50:36.826 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:50:37.085 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:50:37.085 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:37.085 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:50:37.085 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:50:37.085 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:50:37.085 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:37.085 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key3 00:50:37.085 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:37.085 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:37.085 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:37.085 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:50:37.085 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:50:37.085 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:50:37.345 00:50:37.345 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:37.345 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:37.345 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:37.605 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:37.605 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:37.605 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:37.605 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:37.605 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:37.605 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:37.605 { 00:50:37.605 "auth": { 00:50:37.605 "dhgroup": "ffdhe2048", 00:50:37.605 "digest": "sha512", 00:50:37.605 "state": "completed" 00:50:37.605 }, 00:50:37.605 "cntlid": 111, 00:50:37.605 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:50:37.605 "listen_address": { 00:50:37.605 "adrfam": "IPv4", 00:50:37.605 "traddr": "10.0.0.3", 00:50:37.605 "trsvcid": "4420", 00:50:37.605 "trtype": "TCP" 00:50:37.605 }, 00:50:37.605 "peer_address": { 00:50:37.605 "adrfam": "IPv4", 00:50:37.605 "traddr": "10.0.0.1", 00:50:37.605 "trsvcid": "37224", 00:50:37.605 "trtype": "TCP" 00:50:37.605 }, 00:50:37.605 "qid": 0, 00:50:37.605 "state": "enabled", 00:50:37.605 "thread": "nvmf_tgt_poll_group_000" 00:50:37.605 } 00:50:37.605 ]' 00:50:37.605 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:37.605 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:50:37.605 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:37.865 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:50:37.865 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:37.865 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:37.865 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:37.865 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:38.125 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:50:38.125 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:50:38.693 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:38.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:38.693 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:50:38.693 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:38.693 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:38.693 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:38.693 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:50:38.693 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:38.693 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:50:38.693 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:50:38.953 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:50:38.953 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:38.953 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:50:38.953 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:50:38.953 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:50:38.953 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:38.953 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:38.953 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:38.953 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:38.953 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:38.953 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:38.953 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:38.953 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:39.212 00:50:39.212 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:39.212 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:50:39.212 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:39.472 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:39.472 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:39.472 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:39.472 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:39.472 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:39.472 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:39.472 { 00:50:39.472 "auth": { 00:50:39.472 "dhgroup": "ffdhe3072", 00:50:39.472 "digest": "sha512", 00:50:39.472 "state": "completed" 00:50:39.472 }, 00:50:39.472 "cntlid": 113, 00:50:39.472 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:50:39.472 "listen_address": { 00:50:39.472 "adrfam": "IPv4", 00:50:39.472 "traddr": "10.0.0.3", 00:50:39.472 "trsvcid": "4420", 00:50:39.472 "trtype": "TCP" 00:50:39.472 }, 00:50:39.472 "peer_address": { 00:50:39.472 "adrfam": "IPv4", 00:50:39.472 "traddr": "10.0.0.1", 00:50:39.472 "trsvcid": "37254", 00:50:39.472 "trtype": "TCP" 00:50:39.472 }, 00:50:39.472 "qid": 0, 00:50:39.472 "state": "enabled", 00:50:39.472 "thread": "nvmf_tgt_poll_group_000" 00:50:39.472 } 00:50:39.472 ]' 00:50:39.472 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:39.472 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:50:39.472 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:39.472 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:50:39.472 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:39.731 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:39.732 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:39.732 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:39.732 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:50:39.732 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret 
DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:50:40.301 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:40.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:40.561 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:50:40.561 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:40.561 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:40.561 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:40.561 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:40.561 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:50:40.561 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:50:40.561 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:50:40.561 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:40.561 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:50:40.561 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:50:40.561 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:50:40.561 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:40.561 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:40.561 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:40.561 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:40.821 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:40.821 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:40.821 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:40.821 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:41.080 00:50:41.080 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:41.080 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:41.080 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:41.339 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:41.339 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:41.339 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:41.339 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:41.339 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:41.339 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:41.339 { 00:50:41.339 "auth": { 00:50:41.339 "dhgroup": "ffdhe3072", 00:50:41.339 "digest": "sha512", 00:50:41.339 "state": "completed" 00:50:41.339 }, 00:50:41.339 "cntlid": 115, 00:50:41.339 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:50:41.339 "listen_address": { 00:50:41.339 "adrfam": "IPv4", 00:50:41.339 "traddr": "10.0.0.3", 00:50:41.339 "trsvcid": "4420", 00:50:41.339 "trtype": "TCP" 00:50:41.339 }, 00:50:41.339 "peer_address": { 00:50:41.339 "adrfam": "IPv4", 00:50:41.339 "traddr": "10.0.0.1", 00:50:41.339 "trsvcid": "34536", 00:50:41.339 "trtype": "TCP" 00:50:41.339 }, 00:50:41.339 "qid": 0, 00:50:41.339 "state": "enabled", 00:50:41.339 "thread": "nvmf_tgt_poll_group_000" 00:50:41.339 } 00:50:41.339 ]' 00:50:41.339 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:41.339 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:50:41.339 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:41.339 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:50:41.339 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:41.339 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:41.339 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:41.339 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:41.599 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:50:41.599 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 
4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:50:42.167 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:42.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:42.167 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:50:42.167 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:42.167 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:42.167 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:42.167 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:42.167 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:50:42.167 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:50:42.427 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:50:42.427 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:42.427 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:50:42.427 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:50:42.427 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:50:42.427 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:42.427 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:42.427 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:42.427 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:42.427 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:42.427 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:42.427 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:42.427 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:42.686 00:50:42.945 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:42.945 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:42.945 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:43.203 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:43.203 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:43.203 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:43.203 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:43.203 10:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:43.203 10:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:43.203 { 00:50:43.203 "auth": { 00:50:43.203 "dhgroup": "ffdhe3072", 00:50:43.203 "digest": "sha512", 00:50:43.203 "state": "completed" 00:50:43.203 }, 00:50:43.203 "cntlid": 117, 00:50:43.203 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:50:43.203 "listen_address": { 00:50:43.203 "adrfam": "IPv4", 00:50:43.203 "traddr": "10.0.0.3", 00:50:43.203 "trsvcid": "4420", 00:50:43.203 "trtype": "TCP" 00:50:43.203 }, 00:50:43.203 "peer_address": { 00:50:43.203 "adrfam": "IPv4", 00:50:43.203 "traddr": "10.0.0.1", 00:50:43.203 "trsvcid": "34554", 00:50:43.203 "trtype": "TCP" 00:50:43.203 }, 00:50:43.203 "qid": 0, 00:50:43.203 "state": "enabled", 00:50:43.204 "thread": "nvmf_tgt_poll_group_000" 00:50:43.204 } 00:50:43.204 ]' 00:50:43.204 10:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:43.204 10:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:50:43.204 10:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:43.204 10:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:50:43.204 10:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:43.204 10:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:43.204 10:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:43.204 10:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:43.463 10:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37: 00:50:43.463 10:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37: 00:50:44.033 10:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:44.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:44.033 10:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:50:44.033 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:44.033 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:44.033 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:44.033 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:44.033 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:50:44.033 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:50:44.291 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:50:44.291 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:44.291 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:50:44.291 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:50:44.291 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:50:44.291 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:44.291 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key3 00:50:44.291 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:44.291 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:44.291 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:44.291 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:50:44.291 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:50:44.291 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:50:44.551 00:50:44.810 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:44.810 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:44.810 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:44.810 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:44.810 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:44.810 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:44.810 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:45.070 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:45.070 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:45.070 { 00:50:45.070 "auth": { 00:50:45.070 "dhgroup": "ffdhe3072", 00:50:45.070 "digest": "sha512", 00:50:45.070 "state": "completed" 00:50:45.070 }, 00:50:45.070 "cntlid": 119, 00:50:45.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:50:45.070 "listen_address": { 00:50:45.070 "adrfam": "IPv4", 00:50:45.070 "traddr": "10.0.0.3", 00:50:45.070 "trsvcid": "4420", 00:50:45.070 "trtype": "TCP" 00:50:45.070 }, 00:50:45.070 "peer_address": { 00:50:45.070 "adrfam": "IPv4", 00:50:45.070 "traddr": "10.0.0.1", 00:50:45.070 "trsvcid": "34570", 00:50:45.070 "trtype": "TCP" 00:50:45.070 }, 00:50:45.070 "qid": 0, 00:50:45.070 "state": "enabled", 00:50:45.070 "thread": "nvmf_tgt_poll_group_000" 00:50:45.070 } 00:50:45.070 ]' 00:50:45.070 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:45.070 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:50:45.070 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:45.070 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:50:45.070 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:45.070 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:45.070 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:45.070 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:45.329 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:50:45.329 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:50:45.897 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:45.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:45.897 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:50:45.897 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:45.897 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:45.897 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:45.897 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:50:45.898 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:45.898 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:50:45.898 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:50:46.156 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:50:46.156 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:46.156 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:50:46.156 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:50:46.156 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:50:46.156 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:46.156 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:46.156 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:46.156 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:46.156 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:46.156 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:46.156 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:46.156 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:46.415 00:50:46.415 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:46.415 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:46.415 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:46.674 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:46.674 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:46.674 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:46.674 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:46.674 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:46.674 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:46.674 { 00:50:46.674 "auth": { 00:50:46.674 "dhgroup": "ffdhe4096", 00:50:46.674 "digest": "sha512", 00:50:46.674 "state": "completed" 00:50:46.674 }, 00:50:46.674 "cntlid": 121, 00:50:46.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:50:46.674 "listen_address": { 00:50:46.674 "adrfam": "IPv4", 00:50:46.674 "traddr": "10.0.0.3", 00:50:46.674 "trsvcid": "4420", 00:50:46.674 "trtype": "TCP" 00:50:46.674 }, 00:50:46.674 "peer_address": { 00:50:46.674 "adrfam": "IPv4", 00:50:46.674 "traddr": "10.0.0.1", 00:50:46.674 "trsvcid": "34596", 00:50:46.674 "trtype": "TCP" 00:50:46.674 }, 00:50:46.674 "qid": 0, 00:50:46.674 "state": "enabled", 00:50:46.674 "thread": "nvmf_tgt_poll_group_000" 00:50:46.674 } 00:50:46.674 ]' 00:50:46.674 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:46.935 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:50:46.935 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:46.935 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:50:46.935 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:46.935 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:46.936 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:46.936 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:47.195 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret 
DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:50:47.195 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:50:47.765 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:47.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:47.765 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:50:47.765 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:47.765 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:47.765 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:47.765 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:47.765 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:50:47.765 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:50:48.025 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:50:48.025 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:48.025 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:50:48.025 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:50:48.025 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:50:48.026 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:48.026 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:48.026 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:48.026 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:48.026 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:48.026 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:48.026 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:48.026 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:48.286 00:50:48.286 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:48.286 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:48.286 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:48.855 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:48.855 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:48.855 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:48.855 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:48.855 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:48.855 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:48.855 { 00:50:48.855 "auth": { 00:50:48.855 "dhgroup": "ffdhe4096", 00:50:48.855 "digest": "sha512", 00:50:48.855 "state": "completed" 00:50:48.855 }, 00:50:48.855 "cntlid": 123, 00:50:48.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:50:48.855 "listen_address": { 00:50:48.855 "adrfam": "IPv4", 00:50:48.855 "traddr": "10.0.0.3", 00:50:48.855 "trsvcid": "4420", 00:50:48.855 "trtype": "TCP" 00:50:48.855 }, 00:50:48.855 "peer_address": { 00:50:48.855 "adrfam": "IPv4", 00:50:48.855 "traddr": "10.0.0.1", 00:50:48.855 "trsvcid": "34624", 00:50:48.855 "trtype": "TCP" 00:50:48.855 }, 00:50:48.855 "qid": 0, 00:50:48.855 "state": "enabled", 00:50:48.855 "thread": "nvmf_tgt_poll_group_000" 00:50:48.855 } 00:50:48.855 ]' 00:50:48.855 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:48.855 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:50:48.855 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:48.855 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:50:48.855 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:48.855 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:48.855 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:48.855 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:49.115 10:01:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:50:49.115 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:50:49.684 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:49.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:49.684 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:50:49.684 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:49.684 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:49.684 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:49.684 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:49.684 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:50:49.684 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:50:49.944 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:50:49.944 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:49.944 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:50:49.944 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:50:49.944 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:50:49.944 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:49.944 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:49.944 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:49.944 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:49.944 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:49.944 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:49.944 10:01:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:49.944 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:50.202 00:50:50.202 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:50.202 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:50.202 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:50.460 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:50.460 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:50.460 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:50.460 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:50.460 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:50.460 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:50.460 { 00:50:50.460 "auth": { 00:50:50.460 "dhgroup": "ffdhe4096", 00:50:50.460 "digest": "sha512", 00:50:50.460 "state": "completed" 00:50:50.460 }, 00:50:50.460 "cntlid": 125, 00:50:50.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:50:50.460 "listen_address": { 00:50:50.460 "adrfam": "IPv4", 00:50:50.460 "traddr": "10.0.0.3", 00:50:50.460 "trsvcid": "4420", 00:50:50.460 "trtype": "TCP" 00:50:50.460 }, 00:50:50.460 "peer_address": { 00:50:50.460 "adrfam": "IPv4", 00:50:50.460 "traddr": "10.0.0.1", 00:50:50.460 "trsvcid": "33650", 00:50:50.460 "trtype": "TCP" 00:50:50.460 }, 00:50:50.460 "qid": 0, 00:50:50.460 "state": "enabled", 00:50:50.460 "thread": "nvmf_tgt_poll_group_000" 00:50:50.460 } 00:50:50.460 ]' 00:50:50.460 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:50.720 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:50:50.720 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:50.720 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:50:50.720 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:50.720 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:50.720 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:50.720 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:50.979 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37: 00:50:50.979 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37: 00:50:51.552 10:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:51.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:51.552 10:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:50:51.552 10:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:51.552 10:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:51.552 10:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:51.552 10:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:51.552 10:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:50:51.552 10:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:50:51.816 10:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:50:51.816 10:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:51.816 10:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:50:51.816 10:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:50:51.816 10:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:50:51.816 10:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:51.816 10:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key3 00:50:51.816 10:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:51.816 10:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:51.816 10:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:51.816 10:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:50:51.816 10:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:50:51.816 10:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:50:52.074 00:50:52.074 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:52.074 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:52.074 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:52.333 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:52.333 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:52.333 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:52.333 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:52.333 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:52.333 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:52.333 { 00:50:52.333 "auth": { 00:50:52.333 "dhgroup": "ffdhe4096", 00:50:52.333 "digest": "sha512", 00:50:52.333 "state": "completed" 00:50:52.333 }, 00:50:52.333 "cntlid": 127, 00:50:52.333 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:50:52.333 "listen_address": { 00:50:52.333 "adrfam": "IPv4", 00:50:52.333 "traddr": "10.0.0.3", 00:50:52.333 "trsvcid": "4420", 00:50:52.333 "trtype": "TCP" 00:50:52.333 }, 00:50:52.333 "peer_address": { 00:50:52.333 "adrfam": "IPv4", 00:50:52.333 "traddr": "10.0.0.1", 00:50:52.333 "trsvcid": "33680", 00:50:52.333 "trtype": "TCP" 00:50:52.333 }, 00:50:52.333 "qid": 0, 00:50:52.333 "state": "enabled", 00:50:52.333 "thread": "nvmf_tgt_poll_group_000" 00:50:52.333 } 00:50:52.333 ]' 00:50:52.333 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:52.333 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:50:52.333 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:52.593 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:50:52.593 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:52.593 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:52.593 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:52.593 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:52.852 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:50:52.853 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:50:53.421 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:53.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:53.421 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:50:53.421 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:53.421 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:53.421 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:53.421 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:50:53.421 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:53.421 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:50:53.421 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:50:53.680 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:50:53.680 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:53.680 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:50:53.680 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:50:53.680 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:50:53.680 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:53.680 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:53.680 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:53.680 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:53.680 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:53.680 10:02:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:53.681 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:53.681 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:54.248 00:50:54.249 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:54.249 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:54.249 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:54.507 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:54.507 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:54.507 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:54.507 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:54.507 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:54.507 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:54.507 { 00:50:54.507 "auth": { 00:50:54.507 "dhgroup": "ffdhe6144", 00:50:54.507 "digest": "sha512", 00:50:54.507 "state": "completed" 00:50:54.507 }, 00:50:54.507 "cntlid": 129, 00:50:54.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:50:54.507 "listen_address": { 00:50:54.507 "adrfam": "IPv4", 00:50:54.507 "traddr": "10.0.0.3", 00:50:54.507 "trsvcid": "4420", 00:50:54.507 "trtype": "TCP" 00:50:54.507 }, 00:50:54.507 "peer_address": { 00:50:54.507 "adrfam": "IPv4", 00:50:54.507 "traddr": "10.0.0.1", 00:50:54.507 "trsvcid": "33708", 00:50:54.507 "trtype": "TCP" 00:50:54.507 }, 00:50:54.507 "qid": 0, 00:50:54.507 "state": "enabled", 00:50:54.507 "thread": "nvmf_tgt_poll_group_000" 00:50:54.507 } 00:50:54.507 ]' 00:50:54.507 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:54.507 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:50:54.507 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:54.507 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:50:54.507 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:54.507 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:54.507 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:54.507 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:54.767 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:50:54.767 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:50:55.336 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:55.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:55.336 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:50:55.336 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:55.336 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:55.336 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:55.336 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:55.336 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:50:55.336 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:50:55.596 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:50:55.596 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:55.596 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:50:55.596 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:50:55.596 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:50:55.596 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:55.596 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:55.596 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:55.596 10:02:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:55.596 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:55.596 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:55.596 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:55.596 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:56.164 00:50:56.164 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:56.164 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:56.164 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:56.164 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:56.164 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:56.164 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:56.164 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:56.423 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:56.423 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:56.423 { 00:50:56.423 "auth": { 00:50:56.423 "dhgroup": "ffdhe6144", 00:50:56.423 "digest": "sha512", 00:50:56.423 "state": "completed" 00:50:56.423 }, 00:50:56.423 "cntlid": 131, 00:50:56.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:50:56.423 "listen_address": { 00:50:56.423 "adrfam": "IPv4", 00:50:56.423 "traddr": "10.0.0.3", 00:50:56.423 "trsvcid": "4420", 00:50:56.423 "trtype": "TCP" 00:50:56.423 }, 00:50:56.423 "peer_address": { 00:50:56.423 "adrfam": "IPv4", 00:50:56.423 "traddr": "10.0.0.1", 00:50:56.423 "trsvcid": "33738", 00:50:56.423 "trtype": "TCP" 00:50:56.423 }, 00:50:56.423 "qid": 0, 00:50:56.423 "state": "enabled", 00:50:56.423 "thread": "nvmf_tgt_poll_group_000" 00:50:56.423 } 00:50:56.423 ]' 00:50:56.423 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:56.423 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:50:56.423 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:56.423 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:50:56.423 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:50:56.423 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:56.423 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:56.423 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:56.683 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:50:56.683 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:50:57.252 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:57.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:57.252 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:50:57.252 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:57.252 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:57.252 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:57.252 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:57.252 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:50:57.252 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:50:57.511 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:50:57.511 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:57.511 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:50:57.511 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:50:57.511 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:50:57.511 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:57.511 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:57.511 10:02:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:57.511 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:57.511 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:57.511 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:57.511 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:57.511 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:57.770 00:50:58.071 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:58.071 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:58.071 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:58.071 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:58.071 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:58.071 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:58.071 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:58.071 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:58.071 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:50:58.071 { 00:50:58.071 "auth": { 00:50:58.071 "dhgroup": "ffdhe6144", 00:50:58.071 "digest": "sha512", 00:50:58.071 "state": "completed" 00:50:58.071 }, 00:50:58.071 "cntlid": 133, 00:50:58.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:50:58.071 "listen_address": { 00:50:58.071 "adrfam": "IPv4", 00:50:58.071 "traddr": "10.0.0.3", 00:50:58.071 "trsvcid": "4420", 00:50:58.071 "trtype": "TCP" 00:50:58.071 }, 00:50:58.071 "peer_address": { 00:50:58.071 "adrfam": "IPv4", 00:50:58.071 "traddr": "10.0.0.1", 00:50:58.071 "trsvcid": "33768", 00:50:58.071 "trtype": "TCP" 00:50:58.071 }, 00:50:58.071 "qid": 0, 00:50:58.071 "state": "enabled", 00:50:58.071 "thread": "nvmf_tgt_poll_group_000" 00:50:58.071 } 00:50:58.071 ]' 00:50:58.072 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:50:58.351 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:50:58.351 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:50:58.351 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:50:58.351 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:50:58.351 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:50:58.351 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:50:58.351 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:50:58.610 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37: 00:50:58.610 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37: 00:50:59.177 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:50:59.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:50:59.177 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:50:59.177 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:59.177 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:59.178 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:59.178 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:50:59.178 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:50:59.178 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:50:59.438 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:50:59.438 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:50:59.438 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:50:59.438 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:50:59.438 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:50:59.438 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:50:59.438 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key3 00:50:59.438 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:59.438 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:50:59.438 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:59.438 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:50:59.438 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:50:59.438 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:50:59.697 00:50:59.957 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:50:59.957 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:50:59.957 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:50:59.957 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:59.957 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:50:59.957 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:59.957 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:00.215 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:00.215 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:51:00.215 { 00:51:00.215 "auth": { 00:51:00.215 "dhgroup": "ffdhe6144", 00:51:00.215 "digest": "sha512", 00:51:00.215 "state": "completed" 00:51:00.215 }, 00:51:00.215 "cntlid": 135, 00:51:00.215 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:51:00.215 "listen_address": { 00:51:00.215 "adrfam": "IPv4", 00:51:00.215 "traddr": "10.0.0.3", 00:51:00.215 "trsvcid": "4420", 00:51:00.215 "trtype": "TCP" 00:51:00.215 }, 00:51:00.215 "peer_address": { 00:51:00.215 "adrfam": "IPv4", 00:51:00.215 "traddr": "10.0.0.1", 00:51:00.215 "trsvcid": "43740", 00:51:00.215 "trtype": "TCP" 00:51:00.215 }, 00:51:00.215 "qid": 0, 00:51:00.215 "state": "enabled", 00:51:00.215 "thread": "nvmf_tgt_poll_group_000" 00:51:00.215 } 00:51:00.215 ]' 00:51:00.215 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:51:00.215 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:51:00.215 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:51:00.215 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:51:00.215 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:51:00.215 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:51:00.215 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:51:00.215 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:51:00.475 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:51:00.475 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:51:01.043 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:51:01.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:51:01.043 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:51:01.043 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:01.043 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:01.043 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:01.043 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:51:01.043 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:51:01.043 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:51:01.043 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:51:01.301 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:51:01.301 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:51:01.301 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:51:01.301 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:51:01.301 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:51:01.301 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:51:01.301 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:01.301 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:01.301 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:01.301 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:01.301 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:01.301 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:01.301 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:01.869 00:51:01.869 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:51:01.869 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:51:01.869 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:51:02.129 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:02.129 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:51:02.129 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:02.129 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:02.129 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:02.129 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:51:02.129 { 00:51:02.129 "auth": { 00:51:02.129 "dhgroup": "ffdhe8192", 00:51:02.129 "digest": "sha512", 00:51:02.129 "state": "completed" 00:51:02.129 }, 00:51:02.129 "cntlid": 137, 00:51:02.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:51:02.129 "listen_address": { 00:51:02.129 "adrfam": "IPv4", 00:51:02.129 "traddr": "10.0.0.3", 00:51:02.129 "trsvcid": "4420", 00:51:02.129 "trtype": "TCP" 00:51:02.129 }, 00:51:02.129 "peer_address": { 00:51:02.129 "adrfam": "IPv4", 00:51:02.129 "traddr": "10.0.0.1", 00:51:02.129 "trsvcid": "43778", 00:51:02.129 "trtype": "TCP" 00:51:02.129 }, 00:51:02.129 "qid": 0, 00:51:02.129 "state": "enabled", 00:51:02.129 "thread": "nvmf_tgt_poll_group_000" 00:51:02.129 } 00:51:02.129 ]' 00:51:02.129 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:51:02.388 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:51:02.388 10:02:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:51:02.388 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:51:02.388 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:51:02.388 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:51:02.388 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:51:02.388 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:51:02.647 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:51:02.647 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:51:03.214 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:51:03.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:51:03.214 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:51:03.214 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:03.214 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:03.215 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:03.215 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:51:03.215 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:51:03.215 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:51:03.473 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:51:03.473 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:51:03.473 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:51:03.473 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:51:03.473 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:51:03.473 10:02:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:51:03.473 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:03.473 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:03.473 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:03.473 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:03.473 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:03.473 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:03.473 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:04.041 00:51:04.041 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:51:04.041 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:51:04.041 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:51:04.311 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:04.311 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:51:04.311 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:04.311 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:04.311 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:04.311 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:51:04.311 { 00:51:04.311 "auth": { 00:51:04.311 "dhgroup": "ffdhe8192", 00:51:04.311 "digest": "sha512", 00:51:04.311 "state": "completed" 00:51:04.311 }, 00:51:04.311 "cntlid": 139, 00:51:04.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:51:04.311 "listen_address": { 00:51:04.311 "adrfam": "IPv4", 00:51:04.311 "traddr": "10.0.0.3", 00:51:04.311 "trsvcid": "4420", 00:51:04.311 "trtype": "TCP" 00:51:04.311 }, 00:51:04.311 "peer_address": { 00:51:04.311 "adrfam": "IPv4", 00:51:04.311 "traddr": "10.0.0.1", 00:51:04.311 "trsvcid": "43808", 00:51:04.311 "trtype": "TCP" 00:51:04.311 }, 00:51:04.311 "qid": 0, 00:51:04.312 "state": "enabled", 00:51:04.312 "thread": "nvmf_tgt_poll_group_000" 00:51:04.312 } 00:51:04.312 ]' 00:51:04.312 10:02:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:51:04.584 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:51:04.584 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:51:04.584 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:51:04.584 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:51:04.584 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:51:04.584 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:51:04.584 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:51:04.843 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:51:04.843 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: --dhchap-ctrl-secret DHHC-1:02:NjgzMzg2ODBmNDJhNTVkZGE0ODNkMjFmOTZjZmZiNjJmMzQ1YzU1ZGM0NThjMzg17uqw5g==: 00:51:05.411 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:51:05.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:51:05.411 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:51:05.411 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:05.411 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:05.411 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:05.411 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:51:05.411 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:51:05.411 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:51:05.671 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:51:05.671 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:51:05.671 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:51:05.671 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:51:05.671 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:51:05.671 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:51:05.671 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:05.671 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:05.671 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:05.671 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:05.671 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:05.671 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:05.671 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:06.238 00:51:06.238 10:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:51:06.238 10:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:51:06.238 10:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:51:06.497 10:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:06.497 10:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:51:06.497 10:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:06.497 10:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:06.497 10:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:06.497 10:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:51:06.497 { 00:51:06.497 "auth": { 00:51:06.497 "dhgroup": "ffdhe8192", 00:51:06.497 "digest": "sha512", 00:51:06.497 "state": "completed" 00:51:06.497 }, 00:51:06.497 "cntlid": 141, 00:51:06.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:51:06.497 "listen_address": { 00:51:06.497 "adrfam": "IPv4", 00:51:06.497 "traddr": "10.0.0.3", 00:51:06.497 "trsvcid": "4420", 00:51:06.497 "trtype": "TCP" 00:51:06.497 }, 00:51:06.497 "peer_address": { 00:51:06.497 "adrfam": "IPv4", 00:51:06.497 "traddr": "10.0.0.1", 00:51:06.497 "trsvcid": "43830", 00:51:06.497 "trtype": "TCP" 00:51:06.497 }, 00:51:06.497 "qid": 0, 00:51:06.497 "state": 
"enabled", 00:51:06.497 "thread": "nvmf_tgt_poll_group_000" 00:51:06.497 } 00:51:06.497 ]' 00:51:06.497 10:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:51:06.497 10:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:51:06.497 10:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:51:06.497 10:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:51:06.497 10:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:51:06.756 10:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:51:06.756 10:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:51:06.756 10:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:51:07.016 10:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37: 00:51:07.016 10:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:01:ODViMTIwNGVkZjMyNzdhNGMyMGExZWNhMjY1MmZhNTAmni37: 00:51:07.584 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:51:07.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:51:07.584 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:51:07.584 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:07.584 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:07.584 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:07.584 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:51:07.584 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:51:07.584 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:51:07.843 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:51:07.843 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:51:07.843 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:51:07.844 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:51:07.844 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:51:07.844 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:51:07.844 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key3 00:51:07.844 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:07.844 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:07.844 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:07.844 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:51:07.844 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:51:07.844 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:51:08.426 00:51:08.426 10:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:51:08.426 10:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:51:08.426 10:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:51:08.685 10:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:08.685 10:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:51:08.685 10:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:08.685 10:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:08.685 10:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:08.685 10:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:51:08.685 { 00:51:08.685 "auth": { 00:51:08.685 "dhgroup": "ffdhe8192", 00:51:08.685 "digest": "sha512", 00:51:08.685 "state": "completed" 00:51:08.685 }, 00:51:08.685 "cntlid": 143, 00:51:08.685 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:51:08.685 "listen_address": { 00:51:08.685 "adrfam": "IPv4", 00:51:08.685 "traddr": "10.0.0.3", 00:51:08.685 "trsvcid": "4420", 00:51:08.685 "trtype": "TCP" 00:51:08.685 }, 00:51:08.685 "peer_address": { 00:51:08.685 "adrfam": "IPv4", 00:51:08.685 "traddr": "10.0.0.1", 00:51:08.685 "trsvcid": "43854", 00:51:08.685 "trtype": "TCP" 00:51:08.685 }, 00:51:08.685 "qid": 0, 00:51:08.685 
"state": "enabled", 00:51:08.685 "thread": "nvmf_tgt_poll_group_000" 00:51:08.685 } 00:51:08.685 ]' 00:51:08.685 10:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:51:08.685 10:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:51:08.685 10:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:51:08.685 10:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:51:08.685 10:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:51:08.685 10:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:51:08.685 10:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:51:08.685 10:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:51:08.945 10:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:51:08.945 10:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:51:09.514 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:51:09.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:51:09.514 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:51:09.514 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:09.514 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:09.514 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:09.514 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:51:09.514 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:51:09.514 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:51:09.514 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:51:09.514 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:51:09.514 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:51:09.773 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:51:09.773 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:51:09.773 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:51:09.773 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:51:09.773 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:51:09.773 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:51:09.773 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:09.774 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:09.774 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:09.774 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:09.774 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:09.774 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:09.774 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:10.341 00:51:10.598 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:51:10.598 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:51:10.598 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:51:10.855 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:10.855 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:51:10.855 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:10.855 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:10.855 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:10.855 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:51:10.855 { 00:51:10.855 "auth": { 00:51:10.855 "dhgroup": "ffdhe8192", 00:51:10.855 "digest": "sha512", 00:51:10.855 "state": "completed" 00:51:10.855 }, 00:51:10.855 
"cntlid": 145, 00:51:10.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:51:10.855 "listen_address": { 00:51:10.855 "adrfam": "IPv4", 00:51:10.855 "traddr": "10.0.0.3", 00:51:10.855 "trsvcid": "4420", 00:51:10.855 "trtype": "TCP" 00:51:10.855 }, 00:51:10.855 "peer_address": { 00:51:10.855 "adrfam": "IPv4", 00:51:10.855 "traddr": "10.0.0.1", 00:51:10.855 "trsvcid": "49340", 00:51:10.855 "trtype": "TCP" 00:51:10.855 }, 00:51:10.855 "qid": 0, 00:51:10.855 "state": "enabled", 00:51:10.855 "thread": "nvmf_tgt_poll_group_000" 00:51:10.855 } 00:51:10.855 ]' 00:51:10.855 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:51:10.856 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:51:10.856 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:51:10.856 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:51:10.856 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:51:10.856 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:51:10.856 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:51:10.856 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:51:11.114 10:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:51:11.115 10:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:00:Y2Y2MTNiNGEyNjQ3N2IzYmM4OTZiOTk1MzFlZDcyNGFkNTg2MzYyNGM5ZmFmYTcyFAIeyg==: --dhchap-ctrl-secret DHHC-1:03:ZjllYjRhMzhkZmQwMmY1MDMyMGQ2MzhkMzBjOTM2MjYyMTkyMGIyNWIwYjM4NWI1MTE1NGRhNDcxYjM1NWI1MOKilps=: 00:51:11.682 10:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:51:11.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:51:11.682 10:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:51:11.682 10:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:11.682 10:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:11.682 10:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:11.682 10:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key1 00:51:11.682 10:02:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:11.682 10:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:11.682 10:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:11.682 10:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:51:11.682 10:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:51:11.682 10:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:51:11.682 10:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:51:11.682 10:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:51:11.682 10:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:51:11.682 10:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:51:11.682 10:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:51:11.682 10:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:51:11.682 10:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:51:12.251 2024/12/09 10:02:19 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:51:12.251 request: 00:51:12.251 { 00:51:12.251 "method": "bdev_nvme_attach_controller", 00:51:12.251 "params": { 00:51:12.251 "name": "nvme0", 00:51:12.251 "trtype": "tcp", 00:51:12.251 "traddr": "10.0.0.3", 00:51:12.251 "adrfam": "ipv4", 00:51:12.251 "trsvcid": "4420", 00:51:12.251 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:51:12.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:51:12.251 "prchk_reftag": false, 00:51:12.251 "prchk_guard": false, 00:51:12.251 "hdgst": false, 00:51:12.251 "ddgst": false, 00:51:12.251 "dhchap_key": "key2", 00:51:12.251 "allow_unrecognized_csi": false 00:51:12.251 } 00:51:12.251 } 00:51:12.251 Got JSON-RPC error response 00:51:12.251 GoRPCClient: error on JSON-RPC call 00:51:12.251 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:51:12.251 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 
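
The I/O error above is the test passing, not failing: at target/auth.sh@144 the host was re-registered with --dhchap-key key1 only, so the @145 attach attempt offering key2 is refused by the target, bdev_nvme_attach_controller returns Code=-5 (Input/output error), and the NOT wrapper treats the non-zero exit status as the expected outcome. A distilled sketch of the same negative check, reusing the socket paths and NQNs visible in the log (the if/echo scaffolding is hypothetical shorthand, not an SPDK helper):

    #!/usr/bin/env bash
    # Negative DH-CHAP test: the target knows key1, the host offers key2.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541

    # Target side: authorize the host with key1 only.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1

    # Host side: attaching with key2 must fail authentication.
    if "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller \
          -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
          -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key2; then
        echo "FAIL: attach with a mismatched key succeeded" >&2
        exit 1
    fi
    echo "OK: mismatched DH-CHAP key was rejected"

The subsequent @149/@150 and @154/@155 blocks in the log repeat this pattern with other mismatches (key1 vs ckey2, then key1 vs ckey1 after the controller key was dropped), each expected to fail the same way.
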
00:51:12.251 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:51:12.251 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:51:12.251 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:51:12.251 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:12.251 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:12.511 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:12.511 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:12.511 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:12.511 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:12.511 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:12.511 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:51:12.511 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:51:12.511 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:51:12.511 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:51:12.511 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:51:12.511 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:51:12.511 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:51:12.511 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:51:12.511 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:51:12.511 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:51:13.116 2024/12/09 10:02:19 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) 
hostnqn:nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:51:13.116 request: 00:51:13.116 { 00:51:13.116 "method": "bdev_nvme_attach_controller", 00:51:13.116 "params": { 00:51:13.116 "name": "nvme0", 00:51:13.116 "trtype": "tcp", 00:51:13.116 "traddr": "10.0.0.3", 00:51:13.116 "adrfam": "ipv4", 00:51:13.116 "trsvcid": "4420", 00:51:13.116 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:51:13.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:51:13.116 "prchk_reftag": false, 00:51:13.116 "prchk_guard": false, 00:51:13.116 "hdgst": false, 00:51:13.116 "ddgst": false, 00:51:13.116 "dhchap_key": "key1", 00:51:13.116 "dhchap_ctrlr_key": "ckey2", 00:51:13.116 "allow_unrecognized_csi": false 00:51:13.116 } 00:51:13.116 } 00:51:13.116 Got JSON-RPC error response 00:51:13.116 GoRPCClient: error on JSON-RPC call 00:51:13.116 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:51:13.116 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:51:13.116 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:51:13.116 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:51:13.116 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:51:13.116 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:13.116 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:13.116 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:13.116 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key1 00:51:13.116 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:13.116 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:13.116 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:13.116 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:13.116 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:51:13.116 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:13.116 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:51:13.116 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:51:13.116 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # 
type -t bdev_connect 00:51:13.116 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:51:13.116 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:13.116 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:13.116 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:13.688 2024/12/09 10:02:20 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:51:13.688 request: 00:51:13.688 { 00:51:13.688 "method": "bdev_nvme_attach_controller", 00:51:13.688 "params": { 00:51:13.688 "name": "nvme0", 00:51:13.688 "trtype": "tcp", 00:51:13.688 "traddr": "10.0.0.3", 00:51:13.688 "adrfam": "ipv4", 00:51:13.688 "trsvcid": "4420", 00:51:13.688 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:51:13.688 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:51:13.688 "prchk_reftag": false, 00:51:13.688 "prchk_guard": false, 00:51:13.688 "hdgst": false, 00:51:13.689 "ddgst": false, 00:51:13.689 "dhchap_key": "key1", 00:51:13.689 "dhchap_ctrlr_key": "ckey1", 00:51:13.689 "allow_unrecognized_csi": false 00:51:13.689 } 00:51:13.689 } 00:51:13.689 Got JSON-RPC error response 00:51:13.689 GoRPCClient: error on JSON-RPC call 00:51:13.689 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:51:13.689 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:51:13.689 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:51:13.689 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:51:13.689 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:51:13.689 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:13.689 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:13.689 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:13.689 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 77238 00:51:13.689 10:02:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 77238 ']' 00:51:13.689 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 77238 00:51:13.689 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:51:13.689 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:51:13.689 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77238 00:51:13.689 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:51:13.689 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:51:13.689 killing process with pid 77238 00:51:13.689 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77238' 00:51:13.689 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 77238 00:51:13.689 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 77238 00:51:13.948 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:51:13.948 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:51:13.948 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:51:13.948 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:13.948 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:51:13.948 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=82006 00:51:13.948 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 82006 00:51:13.948 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 82006 ']' 00:51:13.948 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:51:13.948 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:51:13.948 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
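
At this point the first nvmf_tgt instance (pid 77238) has been killed and nvmfappstart relaunches the target inside the test network namespace with --wait-for-rpc and the nvmf_auth debug log flag, after which waitforlisten blocks until the new process (pid 82006) answers on /var/tmp/spdk.sock. A minimal sketch of that restart, reusing the exact command line from the log (waitforlisten is the helper defined in autotest_common.sh):

    # Relaunch the target with the auth layer's debug logging enabled.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!

    # Block until the new process accepts RPCs on /var/tmp/spdk.sock.
    waitforlisten "$nvmfpid"
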
00:51:13.948 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:51:13.948 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:14.886 10:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:51:14.886 10:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:51:14.886 10:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:51:14.886 10:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:51:14.886 10:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:14.886 10:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:51:14.886 10:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:51:14.886 10:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 82006 00:51:14.886 10:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 82006 ']' 00:51:14.886 10:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:51:14.886 10:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:51:14.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:51:14.886 10:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
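
Once the restarted target is listening, the script registers every generated DHHC-1 secret with SPDK's keyring as a file-backed key before re-running the sha512/ffdhe8192 connect path, so later RPCs can refer to the secrets by key name rather than inline values. The keyring_file_add_key calls that follow reduce to the sketch below (key names and file paths are copied from the log; the calls go through the test's rpc_cmd wrapper, i.e. the target's default RPC socket, and key3 deliberately has no controller secret):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" keyring_file_add_key key0  /tmp/spdk.key-null.rUD
    "$rpc" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.O5g
    "$rpc" keyring_file_add_key key1  /tmp/spdk.key-sha256.mYg
    "$rpc" keyring_file_add_key ckey1 /tmp/spdk.key-sha384.H0y
    "$rpc" keyring_file_add_key key2  /tmp/spdk.key-sha384.pEv
    "$rpc" keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Fct
    "$rpc" keyring_file_add_key key3  /tmp/spdk.key-sha512.3iI   # no ckey3
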
00:51:14.886 10:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:51:14.886 10:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:15.145 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:51:15.145 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:51:15.145 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:51:15.145 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:15.145 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:15.405 null0 00:51:15.405 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:15.405 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:51:15.405 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.rUD 00:51:15.405 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:15.405 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:15.405 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:15.405 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.O5g ]] 00:51:15.405 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.O5g 00:51:15.405 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:15.405 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:15.405 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:15.405 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:51:15.405 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.mYg 00:51:15.405 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:15.405 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:15.405 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:15.405 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.H0y ]] 00:51:15.406 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.H0y 00:51:15.406 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:15.406 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:15.406 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:15.406 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:51:15.406 10:02:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.pEv 00:51:15.406 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:15.406 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:15.406 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:15.406 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Fct ]] 00:51:15.406 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Fct 00:51:15.406 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:15.406 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:15.406 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:15.406 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:51:15.406 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.3iI 00:51:15.406 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:15.406 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:15.406 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:15.406 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:51:15.406 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:51:15.406 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:51:15.406 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:51:15.406 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:51:15.406 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:51:15.406 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:51:15.406 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key3 00:51:15.406 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:15.406 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:15.406 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:15.406 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:51:15.406 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
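
Throughout connect_authenticate, the array assignment ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) adds the bidirectional-auth arguments only when a controller secret exists for that key index; key3 above has none, so nvme0 is attached here with one-way authentication. A small self-contained illustration of the ${var:+word} expansion the script relies on (the values are hypothetical):

    #!/usr/bin/env bash
    # ${var:+word} expands to 'word' only when var is set and non-empty,
    # so an empty ckey entry yields an empty option array.
    ckeys=([0]="secret0" [3]="")        # index 3 intentionally empty
    for i in 0 3; do
        opt=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
        echo "key$i -> ${opt[@]:-<one-way auth, no ctrlr key>}"
    done
    # key0 -> --dhchap-ctrlr-key ckey0
    # key3 -> <one-way auth, no ctrlr key>
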
00:51:15.406 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:51:16.344 nvme0n1 00:51:16.344 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:51:16.344 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:51:16.345 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:51:16.605 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:16.605 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:51:16.605 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:16.605 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:16.605 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:16.605 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:51:16.605 { 00:51:16.605 "auth": { 00:51:16.605 "dhgroup": "ffdhe8192", 00:51:16.605 "digest": "sha512", 00:51:16.605 "state": "completed" 00:51:16.605 }, 00:51:16.605 "cntlid": 1, 00:51:16.605 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:51:16.605 "listen_address": { 00:51:16.605 "adrfam": "IPv4", 00:51:16.605 "traddr": "10.0.0.3", 00:51:16.605 "trsvcid": "4420", 00:51:16.605 "trtype": "TCP" 00:51:16.605 }, 00:51:16.605 "peer_address": { 00:51:16.605 "adrfam": "IPv4", 00:51:16.605 "traddr": "10.0.0.1", 00:51:16.605 "trsvcid": "49398", 00:51:16.605 "trtype": "TCP" 00:51:16.605 }, 00:51:16.605 "qid": 0, 00:51:16.605 "state": "enabled", 00:51:16.605 "thread": "nvmf_tgt_poll_group_000" 00:51:16.605 } 00:51:16.605 ]' 00:51:16.605 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:51:16.605 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:51:16.605 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:51:16.605 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:51:16.605 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:51:16.605 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:51:16.605 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:51:16.605 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:51:16.865 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:51:16.865 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:51:17.434 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:51:17.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:51:17.695 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:51:17.695 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:17.695 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:17.695 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:17.695 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key3 00:51:17.695 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:17.695 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:17.695 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:17.695 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:51:17.695 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:51:17.955 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:51:17.955 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:51:17.955 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:51:17.955 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:51:17.955 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:51:17.955 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:51:17.955 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:51:17.955 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:51:17.955 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:51:17.955 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:51:17.955 2024/12/09 10:02:24 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:51:17.955 request: 00:51:17.955 { 00:51:17.955 "method": "bdev_nvme_attach_controller", 00:51:17.955 "params": { 00:51:17.955 "name": "nvme0", 00:51:17.955 "trtype": "tcp", 00:51:17.955 "traddr": "10.0.0.3", 00:51:17.955 "adrfam": "ipv4", 00:51:17.955 "trsvcid": "4420", 00:51:17.955 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:51:17.955 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:51:17.955 "prchk_reftag": false, 00:51:17.955 "prchk_guard": false, 00:51:17.955 "hdgst": false, 00:51:17.955 "ddgst": false, 00:51:17.955 "dhchap_key": "key3", 00:51:17.955 "allow_unrecognized_csi": false 00:51:17.955 } 00:51:17.955 } 00:51:17.955 Got JSON-RPC error response 00:51:17.955 GoRPCClient: error on JSON-RPC call 00:51:18.214 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:51:18.214 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:51:18.214 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:51:18.214 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:51:18.214 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:51:18.214 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:51:18.214 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:51:18.214 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:51:18.214 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:51:18.214 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:51:18.214 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:51:18.214 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:51:18.214 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:51:18.214 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@644 -- # type -t bdev_connect 00:51:18.214 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:51:18.214 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:51:18.214 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:51:18.214 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:51:18.473 2024/12/09 10:02:25 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:51:18.473 request: 00:51:18.473 { 00:51:18.473 "method": "bdev_nvme_attach_controller", 00:51:18.473 "params": { 00:51:18.473 "name": "nvme0", 00:51:18.473 "trtype": "tcp", 00:51:18.473 "traddr": "10.0.0.3", 00:51:18.473 "adrfam": "ipv4", 00:51:18.473 "trsvcid": "4420", 00:51:18.473 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:51:18.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:51:18.473 "prchk_reftag": false, 00:51:18.473 "prchk_guard": false, 00:51:18.473 "hdgst": false, 00:51:18.473 "ddgst": false, 00:51:18.473 "dhchap_key": "key3", 00:51:18.473 "allow_unrecognized_csi": false 00:51:18.473 } 00:51:18.473 } 00:51:18.473 Got JSON-RPC error response 00:51:18.473 GoRPCClient: error on JSON-RPC call 00:51:18.473 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:51:18.473 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:51:18.473 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:51:18.473 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:51:18.473 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:51:18.473 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:51:18.733 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:51:18.733 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:51:18.733 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:51:18.733 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:51:18.733 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:51:18.733 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:18.733 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:18.733 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:18.733 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:51:18.733 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:18.733 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:18.993 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:18.993 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:51:18.993 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:51:18.993 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:51:18.993 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:51:18.993 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:51:18.993 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:51:18.993 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:51:18.993 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:51:18.993 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:51:18.993 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:51:19.253 2024/12/09 10:02:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) 
subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:51:19.253 request: 00:51:19.253 { 00:51:19.253 "method": "bdev_nvme_attach_controller", 00:51:19.253 "params": { 00:51:19.253 "name": "nvme0", 00:51:19.253 "trtype": "tcp", 00:51:19.253 "traddr": "10.0.0.3", 00:51:19.253 "adrfam": "ipv4", 00:51:19.253 "trsvcid": "4420", 00:51:19.253 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:51:19.253 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:51:19.253 "prchk_reftag": false, 00:51:19.253 "prchk_guard": false, 00:51:19.253 "hdgst": false, 00:51:19.253 "ddgst": false, 00:51:19.253 "dhchap_key": "key0", 00:51:19.253 "dhchap_ctrlr_key": "key1", 00:51:19.253 "allow_unrecognized_csi": false 00:51:19.253 } 00:51:19.253 } 00:51:19.253 Got JSON-RPC error response 00:51:19.253 GoRPCClient: error on JSON-RPC call 00:51:19.253 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:51:19.253 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:51:19.253 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:51:19.253 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:51:19.253 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:51:19.253 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:51:19.253 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:51:19.512 nvme0n1 00:51:19.512 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:51:19.512 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:51:19.512 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:51:19.771 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:19.771 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:51:19.771 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:51:20.031 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key1 00:51:20.031 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:20.031 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:51:20.031 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:20.031 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:51:20.031 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:51:20.031 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:51:20.969 nvme0n1 00:51:20.969 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:51:20.969 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:51:20.969 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:51:21.228 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:21.228 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key2 --dhchap-ctrlr-key key3 00:51:21.228 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:21.228 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:21.228 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:21.228 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:51:21.228 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:51:21.228 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:51:21.487 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:21.487 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:51:21.487 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid 4e796c80-f069-427b-b4d9-5ff46fca0541 -l 0 --dhchap-secret DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: --dhchap-ctrl-secret DHHC-1:03:ZjFmMGI4YjY0MmI3ZGYxOWM3NjM0MGE3MzhhYTk1YWZlODQ2MTI3NjNhMWVlNDM5ZTQ3NjZmZDM5MzkwZWRhMV79NPM=: 00:51:22.055 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 
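The xtrace that follows steps through target/auth.sh's nvme_get_ctrlr helper, which maps the subsystem under test back to its kernel controller name. A minimal reconstruction of that loop, assuming the NQN is read from each controller's sysfs subsysnqn attribute (the in-tree helper may differ in detail):

    # Reconstructed from the trace below; prints e.g. "nvme0".
    nvme_get_ctrlr() {
        local dev
        for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*; do
            # match the controller whose subsystem NQN is the one we connected to
            if [[ "$(<"$dev/subsysnqn")" == "nqn.2024-03.io.spdk:cnode0" ]]; then
                echo "${dev##*/}"   # strip the sysfs path, keep the controller name
                break
            fi
        done
    }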
00:51:22.055 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:51:22.055 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:51:22.055 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:51:22.055 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:51:22.055 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:51:22.055 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:51:22.055 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:51:22.055 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:51:22.319 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:51:22.319 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:51:22.319 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:51:22.319 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:51:22.319 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:51:22.319 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:51:22.319 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:51:22.319 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:51:22.319 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:51:22.319 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:51:22.891 2024/12/09 10:02:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:51:22.891 request: 00:51:22.891 { 00:51:22.891 "method": "bdev_nvme_attach_controller", 00:51:22.891 "params": { 00:51:22.891 "name": "nvme0", 00:51:22.891 "trtype": "tcp", 00:51:22.891 "traddr": "10.0.0.3", 00:51:22.891 "adrfam": "ipv4", 
00:51:22.891 "trsvcid": "4420", 00:51:22.891 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:51:22.891 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541", 00:51:22.891 "prchk_reftag": false, 00:51:22.891 "prchk_guard": false, 00:51:22.891 "hdgst": false, 00:51:22.891 "ddgst": false, 00:51:22.891 "dhchap_key": "key1", 00:51:22.891 "allow_unrecognized_csi": false 00:51:22.891 } 00:51:22.891 } 00:51:22.891 Got JSON-RPC error response 00:51:22.891 GoRPCClient: error on JSON-RPC call 00:51:22.891 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:51:22.891 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:51:22.891 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:51:22.891 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:51:22.891 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:51:22.891 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:51:22.891 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:51:23.828 nvme0n1 00:51:23.828 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:51:23.828 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:51:23.828 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:51:24.086 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:24.086 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:51:24.086 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:51:24.346 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:51:24.346 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:24.346 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:24.346 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:24.346 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:51:24.346 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:51:24.346 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:51:24.605 nvme0n1 00:51:24.605 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:51:24.605 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:51:24.605 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:51:24.864 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:24.864 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:51:24.864 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:51:25.124 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key1 --dhchap-ctrlr-key key3 00:51:25.124 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:25.124 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:25.124 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:25.124 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: '' 2s 00:51:25.124 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:51:25.124 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:51:25.124 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: 00:51:25.124 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:51:25.124 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:51:25.124 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:51:25.124 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: ]] 00:51:25.124 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YWFjNGUxYjdmNjJiMDkyZTRmOGZkMmEyN2FiOTI0MGG2/9yc: 00:51:25.124 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:51:25.124 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:51:25.124 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:51:27.029 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@241 -- # waitforblk nvme0n1 00:51:27.029 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:51:27.029 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:51:27.029 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:51:27.029 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:51:27.029 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:51:27.288 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:51:27.288 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key1 --dhchap-ctrlr-key key2 00:51:27.288 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:27.288 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:27.288 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:27.288 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: 2s 00:51:27.288 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:51:27.288 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:51:27.288 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:51:27.288 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: 00:51:27.288 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:51:27.288 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:51:27.288 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:51:27.288 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: ]] 00:51:27.288 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YzNiODQ1OTIxNWM2YWU1ZDlhYWZlYTM4YTY1Y2IzMGI2ZWQ0NzI5ZDU3MWE2NDkyk7iGxw==: 00:51:27.288 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:51:27.288 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:51:29.203 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:51:29.203 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:51:29.203 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:51:29.203 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:51:29.203 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:51:29.203 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:51:29.203 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:51:29.203 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:51:29.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:51:29.203 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key0 --dhchap-ctrlr-key key1 00:51:29.203 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:29.203 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:29.203 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:29.203 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:51:29.203 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:51:29.203 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:51:30.140 nvme0n1 00:51:30.140 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key2 --dhchap-ctrlr-key key3 00:51:30.140 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:30.140 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:30.140 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:30.140 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:51:30.140 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:51:30.707 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:51:30.707 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:51:30.707 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r 
'.[].name' 00:51:30.967 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:30.967 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:51:30.967 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:30.967 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:30.967 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:30.967 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:51:30.967 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:51:31.226 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:51:31.226 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:51:31.226 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:51:31.486 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:31.486 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key2 --dhchap-ctrlr-key key3 00:51:31.486 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:31.486 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:31.486 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:31.486 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:51:31.486 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:51:31.486 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:51:31.486 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:51:31.486 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:51:31.486 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:51:31.486 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:51:31.486 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:51:31.486 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 
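This bdev_nvme_set_keys call is wrapped in NOT because the subsystem was just restricted to key2/key3 (auth.sh@260 above), so a host asking to rekey with key1 must be refused; the Permission denied response that follows is the pass condition. A simplified sketch of the NOT wrapper seen throughout this log (the real helper in test/common/autotest_common.sh also validates that its argument is an executable function or binary before running it):

    # Inverts the exit status so an expected failure makes the test pass.
    NOT() {
        if "$@"; then
            return 1   # unexpected success -> negative test fails
        fi
        return 0       # expected failure -> negative test passes
    }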
00:51:32.055 2024/12/09 10:02:38 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key3 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:51:32.055 request: 00:51:32.055 { 00:51:32.055 "method": "bdev_nvme_set_keys", 00:51:32.055 "params": { 00:51:32.055 "name": "nvme0", 00:51:32.055 "dhchap_key": "key1", 00:51:32.055 "dhchap_ctrlr_key": "key3" 00:51:32.055 } 00:51:32.055 } 00:51:32.055 Got JSON-RPC error response 00:51:32.055 GoRPCClient: error on JSON-RPC call 00:51:32.055 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:51:32.055 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:51:32.055 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:51:32.055 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:51:32.055 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:51:32.055 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:51:32.055 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:51:32.314 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:51:32.314 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:51:33.250 10:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:51:33.250 10:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:51:33.250 10:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:51:33.509 10:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:51:33.509 10:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key0 --dhchap-ctrlr-key key1 00:51:33.509 10:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:33.509 10:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:33.509 10:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:33.509 10:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:51:33.509 10:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:51:33.509 10:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:51:34.447 nvme0n1 00:51:34.447 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --dhchap-key key2 --dhchap-ctrlr-key key3 00:51:34.447 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:34.447 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:34.448 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:34.448 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:51:34.448 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:51:34.448 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:51:34.448 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:51:34.448 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:51:34.448 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:51:34.448 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:51:34.448 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:51:34.448 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:51:35.016 2024/12/09 10:02:41 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key0 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:51:35.016 request: 00:51:35.016 { 00:51:35.016 "method": "bdev_nvme_set_keys", 00:51:35.016 "params": { 00:51:35.016 "name": "nvme0", 00:51:35.016 "dhchap_key": "key2", 00:51:35.016 "dhchap_ctrlr_key": "key0" 00:51:35.016 } 00:51:35.016 } 00:51:35.016 Got JSON-RPC error response 00:51:35.016 GoRPCClient: error on JSON-RPC call 00:51:35.016 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:51:35.016 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:51:35.016 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:51:35.016 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:51:35.016 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:51:35.016 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:51:35.016 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:51:35.275 10:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:51:35.275 10:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:51:36.213 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:51:36.213 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:51:36.213 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:51:36.470 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:51:36.470 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:51:36.470 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:51:36.470 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 77282 00:51:36.470 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 77282 ']' 00:51:36.470 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 77282 00:51:36.470 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:51:36.470 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:51:36.470 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77282 00:51:36.470 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:51:36.470 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:51:36.470 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77282' 00:51:36.470 killing process with pid 77282 00:51:36.470 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 77282 00:51:36.470 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 77282 00:51:36.727 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:51:36.727 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:51:36.727 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:51:36.727 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:51:36.727 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:51:36.727 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:51:36.727 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:51:36.727 rmmod nvme_tcp 00:51:36.727 rmmod nvme_fabrics 00:51:36.986 rmmod nvme_keyring 00:51:36.986 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:51:36.986 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 
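The teardown above first reaps the host-side process (killprocess 77282), then nvmftestfini unloads the nvme-tcp kernel modules with error checking relaxed (set +e), since a module may already be gone. The killprocess steps traced above follow roughly this shape (a sketch; the in-tree helper has additional platform branches):

    # Verifies the PID is alive and still belongs to the expected process
    # before killing it, so a recycled PID (e.g. one now owned by sudo) is
    # never signalled.
    killprocess() {
        local pid=$1 process_name
        [[ -n "$pid" ]] || return 1
        kill -0 "$pid" 2> /dev/null || return 0     # already exited
        process_name=$(ps --no-headers -o comm= "$pid")
        [[ "$process_name" != "sudo" ]] || return 1 # refuse to kill sudo
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                 # reap, propagate exit code
    }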
00:51:36.986 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:51:36.986 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 82006 ']' 00:51:36.986 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 82006 00:51:36.986 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 82006 ']' 00:51:36.986 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 82006 00:51:36.986 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:51:36.986 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:51:36.986 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82006 00:51:36.986 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:51:36.986 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:51:36.986 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82006' 00:51:36.986 killing process with pid 82006 00:51:36.986 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 82006 00:51:36.986 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 82006 00:51:36.987 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:51:36.987 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:51:36.987 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:51:36.987 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:51:36.987 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:51:36.987 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:51:36.987 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:51:36.987 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:51:36.987 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:51:36.987 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:51:37.244 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:51:37.244 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:51:37.244 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:51:37.244 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:51:37.244 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:51:37.244 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:51:37.244 10:02:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:51:37.244 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:51:37.244 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:51:37.244 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:51:37.244 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:51:37.244 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:51:37.245 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:51:37.245 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:51:37.245 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:51:37.245 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:51:37.245 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:51:37.245 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.rUD /tmp/spdk.key-sha256.mYg /tmp/spdk.key-sha384.pEv /tmp/spdk.key-sha512.3iI /tmp/spdk.key-sha512.O5g /tmp/spdk.key-sha384.H0y /tmp/spdk.key-sha256.Fct '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:51:37.245 00:51:37.245 real 2m55.392s 00:51:37.245 user 7m0.687s 00:51:37.245 sys 0m23.645s 00:51:37.245 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:51:37.245 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:51:37.245 ************************************ 00:51:37.245 END TEST nvmf_auth_target 00:51:37.245 ************************************ 00:51:37.503 10:02:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:51:37.503 10:02:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:51:37.503 10:02:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:51:37.503 10:02:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:51:37.503 10:02:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:51:37.503 ************************************ 00:51:37.503 START TEST nvmf_bdevio_no_huge 00:51:37.503 ************************************ 00:51:37.503 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:51:37.503 * Looking for test storage... 
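The next test (nvmf_bdevio_no_huge) begins here; its first traced lines probe the installed lcov version through scripts/common.sh's lt/cmp_versions helpers before choosing coverage flags. A condensed sketch of that comparison logic (the in-tree version handles more operators and edge cases):

    # Split each version on '.-:' and compare component-wise, treating a
    # missing component as 0, so "1.15" vs "2" compares (1,15) vs (2,0).
    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v a b
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            if ((a > b)); then [[ $op == '>' ]]; return; fi
            if ((a < b)); then [[ $op == '<' ]]; return; fi
        done
        [[ $op == '=' ]]   # all components compared equal
    }

In this run, lt 1.15 2 returns success, and the trace goes on to set the matching lcov --rc options.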
00:51:37.503 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:51:37.503 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:51:37.503 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:51:37.503 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:51:37.763 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:51:37.763 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:51:37.763 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:51:37.763 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:51:37.763 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:51:37.763 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:51:37.763 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:51:37.763 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:51:37.763 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:51:37.763 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:51:37.763 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:51:37.763 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:51:37.763 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:51:37.763 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:51:37.763 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:51:37.763 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:51:37.763 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:51:37.763 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:51:37.763 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:51:37.763 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:51:37.763 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:51:37.763 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:51:37.763 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:51:37.763 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:51:37.763 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:51:37.763 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:51:37.763 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:51:37.763 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:51:37.763 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:51:37.763 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:51:37.763 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:51:37.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:37.763 --rc genhtml_branch_coverage=1 00:51:37.763 --rc genhtml_function_coverage=1 00:51:37.763 --rc genhtml_legend=1 00:51:37.763 --rc geninfo_all_blocks=1 00:51:37.763 --rc geninfo_unexecuted_blocks=1 00:51:37.763 00:51:37.763 ' 00:51:37.763 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:51:37.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:37.763 --rc genhtml_branch_coverage=1 00:51:37.763 --rc genhtml_function_coverage=1 00:51:37.763 --rc genhtml_legend=1 00:51:37.763 --rc geninfo_all_blocks=1 00:51:37.763 --rc geninfo_unexecuted_blocks=1 00:51:37.763 00:51:37.763 ' 00:51:37.763 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:51:37.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:37.763 --rc genhtml_branch_coverage=1 00:51:37.763 --rc genhtml_function_coverage=1 00:51:37.763 --rc genhtml_legend=1 00:51:37.763 --rc geninfo_all_blocks=1 00:51:37.763 --rc geninfo_unexecuted_blocks=1 00:51:37.763 00:51:37.763 ' 00:51:37.763 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:51:37.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:37.763 --rc genhtml_branch_coverage=1 00:51:37.763 --rc genhtml_function_coverage=1 00:51:37.763 --rc genhtml_legend=1 00:51:37.763 --rc geninfo_all_blocks=1 00:51:37.764 --rc geninfo_unexecuted_blocks=1 00:51:37.764 00:51:37.764 ' 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:51:37.764 
10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:51:37.764 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:51:37.764 
10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:51:37.764 Cannot find device "nvmf_init_br" 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:51:37.764 Cannot find device "nvmf_init_br2" 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:51:37.764 Cannot find device "nvmf_tgt_br" 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:51:37.764 Cannot find device "nvmf_tgt_br2" 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:51:37.764 Cannot find device "nvmf_init_br" 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:51:37.764 Cannot find device "nvmf_init_br2" 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:51:37.764 Cannot find device "nvmf_tgt_br" 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:51:37.764 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:51:37.764 Cannot find device "nvmf_tgt_br2" 00:51:37.765 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:51:37.765 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:51:37.765 Cannot find device "nvmf_br" 00:51:37.765 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:51:37.765 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:51:37.765 Cannot find device "nvmf_init_if" 00:51:37.765 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:51:37.765 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:51:37.765 Cannot find device "nvmf_init_if2" 00:51:37.765 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:51:37.765 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:51:37.765 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:51:37.765 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:51:37.765 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:51:37.765 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:51:37.765 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:51:37.765 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:51:38.024 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:51:38.024 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:51:38.024 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:51:38.024 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:51:38.024 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:51:38.024 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:51:38.024 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:51:38.024 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:51:38.024 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:51:38.024 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:51:38.024 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:51:38.024 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:51:38.024 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:51:38.024 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:51:38.024 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:51:38.024 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:51:38.024 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:51:38.024 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:51:38.024 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:51:38.024 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:51:38.024 10:02:44 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:51:38.024 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:51:38.024 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:51:38.024 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:51:38.024 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:51:38.024 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:51:38.024 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:51:38.024 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:51:38.024 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:51:38.024 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:51:38.024 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:51:38.024 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:51:38.024 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:51:38.024 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.156 ms 00:51:38.024 00:51:38.024 --- 10.0.0.3 ping statistics --- 00:51:38.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:38.024 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:51:38.024 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:51:38.024 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:51:38.024 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.142 ms 00:51:38.024 00:51:38.024 --- 10.0.0.4 ping statistics --- 00:51:38.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:38.024 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:51:38.024 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:51:38.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:51:38.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:51:38.024 00:51:38.024 --- 10.0.0.1 ping statistics --- 00:51:38.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:38.024 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:51:38.024 10:02:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:51:38.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:51:38.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:51:38.024 00:51:38.024 --- 10.0.0.2 ping statistics --- 00:51:38.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:38.024 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:51:38.024 10:02:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:51:38.024 10:02:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:51:38.024 10:02:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:51:38.024 10:02:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:51:38.024 10:02:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:51:38.024 10:02:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:51:38.024 10:02:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:51:38.024 10:02:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:51:38.024 10:02:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:51:38.024 10:02:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:51:38.024 10:02:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:51:38.024 10:02:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:51:38.024 10:02:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:51:38.024 10:02:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=82866 00:51:38.024 10:02:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:51:38.024 10:02:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 82866 00:51:38.024 10:02:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 82866 ']' 00:51:38.024 10:02:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:51:38.024 10:02:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:51:38.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:51:38.024 10:02:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:51:38.024 10:02:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:51:38.024 10:02:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:51:38.282 [2024-12-09 10:02:45.115112] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:51:38.282 [2024-12-09 10:02:45.115187] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --legacy-mem --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:51:38.282 [2024-12-09 10:02:45.273318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:51:38.541 [2024-12-09 10:02:45.329493] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:51:38.541 [2024-12-09 10:02:45.329551] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:51:38.541 [2024-12-09 10:02:45.329558] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:51:38.541 [2024-12-09 10:02:45.329563] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:51:38.541 [2024-12-09 10:02:45.329567] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:51:38.541 [2024-12-09 10:02:45.330116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:51:38.542 [2024-12-09 10:02:45.330311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:51:38.542 [2024-12-09 10:02:45.330416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:51:38.542 [2024-12-09 10:02:45.330421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:51:39.109 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:51:39.109 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:51:39.109 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:51:39.109 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:51:39.109 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:51:39.109 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:51:39.109 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:51:39.109 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:39.109 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:51:39.109 [2024-12-09 10:02:46.075010] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:51:39.109 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:39.109 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:51:39.109 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:39.109 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:51:39.109 Malloc0 00:51:39.109 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:39.109 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 
-a -s SPDK00000000000001 00:51:39.110 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:39.110 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:51:39.110 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:39.110 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:51:39.110 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:39.110 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:51:39.110 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:39.110 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:51:39.110 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:39.110 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:51:39.110 [2024-12-09 10:02:46.127922] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:51:39.110 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:39.110 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:51:39.110 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:51:39.110 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:51:39.110 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:51:39.110 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:51:39.110 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:51:39.110 { 00:51:39.110 "params": { 00:51:39.110 "name": "Nvme$subsystem", 00:51:39.110 "trtype": "$TEST_TRANSPORT", 00:51:39.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:39.110 "adrfam": "ipv4", 00:51:39.110 "trsvcid": "$NVMF_PORT", 00:51:39.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:39.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:39.110 "hdgst": ${hdgst:-false}, 00:51:39.110 "ddgst": ${ddgst:-false} 00:51:39.110 }, 00:51:39.110 "method": "bdev_nvme_attach_controller" 00:51:39.110 } 00:51:39.110 EOF 00:51:39.110 )") 00:51:39.110 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:51:39.110 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
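The rpc_cmd calls above map one-to-one onto plain rpc.py invocations. A sketch of the same provisioning done by hand, with every argument copied from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport, 8192-byte IO unit
    $rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM-backed bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

bdevio then attaches to that listener through the generated --json config; its resolved form (a bdev_nvme_attach_controller entry pointing at 10.0.0.3:4420) is printed next.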
00:51:39.110 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:51:39.110 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:51:39.110 "params": { 00:51:39.110 "name": "Nvme1", 00:51:39.110 "trtype": "tcp", 00:51:39.110 "traddr": "10.0.0.3", 00:51:39.110 "adrfam": "ipv4", 00:51:39.110 "trsvcid": "4420", 00:51:39.110 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:51:39.110 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:51:39.110 "hdgst": false, 00:51:39.110 "ddgst": false 00:51:39.110 }, 00:51:39.110 "method": "bdev_nvme_attach_controller" 00:51:39.110 }' 00:51:39.409 [2024-12-09 10:02:46.185247] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:51:39.409 [2024-12-09 10:02:46.185313] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --legacy-mem --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid82919 ] 00:51:39.409 [2024-12-09 10:02:46.340312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:51:39.409 [2024-12-09 10:02:46.398417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:51:39.409 [2024-12-09 10:02:46.398554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:51:39.409 [2024-12-09 10:02:46.398555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:51:39.669 I/O targets: 00:51:39.669 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:51:39.669 00:51:39.669 00:51:39.669 CUnit - A unit testing framework for C - Version 2.1-3 00:51:39.669 http://cunit.sourceforge.net/ 00:51:39.669 00:51:39.669 00:51:39.669 Suite: bdevio tests on: Nvme1n1 00:51:39.669 Test: blockdev write read block ...passed 00:51:39.669 Test: blockdev write zeroes read block ...passed 00:51:39.669 Test: blockdev write zeroes read no split ...passed 00:51:39.929 Test: blockdev write zeroes read split ...passed 00:51:39.929 Test: blockdev write zeroes read split partial ...passed 00:51:39.929 Test: blockdev reset ...[2024-12-09 10:02:46.742557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:51:39.929 [2024-12-09 10:02:46.742654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d1eb0 (9): Bad file descriptor 00:51:39.929 passed 00:51:39.929 Test: blockdev write read 8 blocks ...[2024-12-09 10:02:46.754424] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:51:39.929 passed 00:51:39.929 Test: blockdev write read size > 128k ...passed 00:51:39.929 Test: blockdev write read invalid size ...passed 00:51:39.929 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:51:39.929 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:51:39.929 Test: blockdev write read max offset ...passed 00:51:39.929 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:51:39.929 Test: blockdev writev readv 8 blocks ...passed 00:51:39.929 Test: blockdev writev readv 30 x 1block ...passed 00:51:39.929 Test: blockdev writev readv block ...passed 00:51:39.929 Test: blockdev writev readv size > 128k ...passed 00:51:39.929 Test: blockdev writev readv size > 128k in two iovs ...passed 00:51:39.929 Test: blockdev comparev and writev ...[2024-12-09 10:02:46.926174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:51:39.930 [2024-12-09 10:02:46.926208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:39.930 [2024-12-09 10:02:46.926222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:51:39.930 [2024-12-09 10:02:46.926230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:51:39.930 [2024-12-09 10:02:46.926469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:51:39.930 [2024-12-09 10:02:46.926492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:51:39.930 [2024-12-09 10:02:46.926505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:51:39.930 [2024-12-09 10:02:46.926511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:51:39.930 [2024-12-09 10:02:46.926744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:51:39.930 [2024-12-09 10:02:46.926759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:51:39.930 [2024-12-09 10:02:46.926771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:51:39.930 [2024-12-09 10:02:46.926778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:51:39.930 [2024-12-09 10:02:46.927033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:51:39.930 [2024-12-09 10:02:46.927048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:51:39.930 [2024-12-09 10:02:46.927059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:51:39.930 [2024-12-09 10:02:46.927066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:51:39.930 passed 00:51:40.189 Test: blockdev nvme passthru rw ...passed 00:51:40.189 Test: blockdev nvme passthru vendor specific ...[2024-12-09 10:02:47.009829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:51:40.189 [2024-12-09 10:02:47.009854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:51:40.189 passed 00:51:40.189 Test: blockdev nvme admin passthru ...[2024-12-09 10:02:47.009965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:51:40.189 [2024-12-09 10:02:47.009978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:51:40.189 [2024-12-09 10:02:47.010068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:51:40.189 [2024-12-09 10:02:47.010076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:51:40.189 [2024-12-09 10:02:47.010180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:51:40.189 [2024-12-09 10:02:47.010189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:51:40.189 passed 00:51:40.189 Test: blockdev copy ...passed 00:51:40.189 00:51:40.189 Run Summary: Type Total Ran Passed Failed Inactive 00:51:40.189 suites 1 1 n/a 0 0 00:51:40.189 tests 23 23 23 0 0 00:51:40.189 asserts 152 152 152 0 n/a 00:51:40.189 00:51:40.189 Elapsed time = 0.927 seconds 00:51:40.448 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:51:40.448 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:40.448 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:51:40.448 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:40.448 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:51:40.448 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:51:40.448 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:51:40.448 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:51:40.448 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:51:40.448 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:51:40.448 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:51:40.448 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:51:40.448 rmmod nvme_tcp 00:51:40.707 rmmod nvme_fabrics 00:51:40.707 rmmod nvme_keyring 00:51:40.707 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:51:40.707 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:51:40.707 10:02:47 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:51:40.707 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 82866 ']' 00:51:40.707 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 82866 00:51:40.707 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 82866 ']' 00:51:40.707 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 82866 00:51:40.707 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:51:40.707 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:51:40.707 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82866 00:51:40.707 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:51:40.707 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:51:40.707 killing process with pid 82866 00:51:40.707 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82866' 00:51:40.707 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 82866 00:51:40.707 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 82866 00:51:40.967 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:51:40.967 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:51:40.968 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:51:40.968 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:51:40.968 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:51:40.968 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:51:40.968 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:51:40.968 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:51:40.968 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:51:40.968 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:51:40.968 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:51:40.968 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:51:40.968 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:51:40.968 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:51:40.968 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:51:41.227 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set 
nvmf_tgt_br down 00:51:41.227 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:51:41.227 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:51:41.227 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:51:41.227 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:51:41.227 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:51:41.227 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:51:41.227 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:51:41.227 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:51:41.227 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:51:41.227 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:51:41.227 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:51:41.227 00:51:41.227 real 0m3.849s 00:51:41.227 user 0m12.483s 00:51:41.227 sys 0m1.467s 00:51:41.227 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:51:41.227 ************************************ 00:51:41.227 END TEST nvmf_bdevio_no_huge 00:51:41.227 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:51:41.227 ************************************ 00:51:41.227 10:02:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:51:41.227 10:02:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:51:41.227 10:02:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:51:41.227 10:02:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:51:41.227 ************************************ 00:51:41.227 START TEST nvmf_tls 00:51:41.227 ************************************ 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:51:41.488 * Looking for test storage... 
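Every suite in this log is wrapped the same way: the START TEST/END TEST banners and the real/user/sys lines come from a wrapper along these lines. This is a schematic reconstruction of the run_test pattern, not the literal autotest_common.sh helper.

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # bash's time keyword emits the real/user/sys summary seen above
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    # Usage, as invoked just above for the next suite:
    run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp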
00:51:41.488 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:51:41.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:41.488 --rc genhtml_branch_coverage=1 00:51:41.488 --rc genhtml_function_coverage=1 00:51:41.488 --rc genhtml_legend=1 00:51:41.488 --rc geninfo_all_blocks=1 00:51:41.488 --rc geninfo_unexecuted_blocks=1 00:51:41.488 00:51:41.488 ' 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:51:41.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:41.488 --rc genhtml_branch_coverage=1 00:51:41.488 --rc genhtml_function_coverage=1 00:51:41.488 --rc genhtml_legend=1 00:51:41.488 --rc geninfo_all_blocks=1 00:51:41.488 --rc geninfo_unexecuted_blocks=1 00:51:41.488 00:51:41.488 ' 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:51:41.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:41.488 --rc genhtml_branch_coverage=1 00:51:41.488 --rc genhtml_function_coverage=1 00:51:41.488 --rc genhtml_legend=1 00:51:41.488 --rc geninfo_all_blocks=1 00:51:41.488 --rc geninfo_unexecuted_blocks=1 00:51:41.488 00:51:41.488 ' 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:51:41.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:41.488 --rc genhtml_branch_coverage=1 00:51:41.488 --rc genhtml_function_coverage=1 00:51:41.488 --rc genhtml_legend=1 00:51:41.488 --rc geninfo_all_blocks=1 00:51:41.488 --rc geninfo_unexecuted_blocks=1 00:51:41.488 00:51:41.488 ' 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:51:41.488 10:02:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:51:41.488 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:41.489 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:41.489 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:41.489 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:51:41.489 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:41.489 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:51:41.489 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:51:41.489 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:51:41.489 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:51:41.489 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:51:41.489 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:51:41.489 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:51:41.489 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:51:41.489 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:51:41.489 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:51:41.489 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:51:41.489 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:51:41.489 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:51:41.489 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:51:41.489 
10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:51:41.489 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:51:41.489 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:51:41.489 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:51:41.489 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:51:41.489 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:51:41.489 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:51:41.748 Cannot find device "nvmf_init_br" 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:51:41.748 Cannot find device "nvmf_init_br2" 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:51:41.748 Cannot find device "nvmf_tgt_br" 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:51:41.748 Cannot find device "nvmf_tgt_br2" 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:51:41.748 Cannot find device "nvmf_init_br" 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:51:41.748 Cannot find device "nvmf_init_br2" 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:51:41.748 Cannot find device "nvmf_tgt_br" 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:51:41.748 Cannot find device "nvmf_tgt_br2" 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:51:41.748 Cannot find device "nvmf_br" 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:51:41.748 Cannot find device "nvmf_init_if" 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:51:41.748 Cannot find device "nvmf_init_if2" 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:51:41.748 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:51:41.748 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:51:41.748 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:51:42.007 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:51:42.007 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:51:42.007 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:51:42.007 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:51:42.007 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:51:42.007 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:51:42.007 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:51:42.007 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:51:42.007 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:51:42.007 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:51:42.007 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:51:42.007 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:51:42.007 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:51:42.007 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:51:42.007 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:51:42.007 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:51:42.007 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:51:42.007 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:51:42.007 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:51:42.007 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:51:42.007 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:51:42.007 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:51:42.007 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:51:42.007 10:02:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:51:42.007 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:51:42.007 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:51:42.007 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:51:42.007 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:51:42.007 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.180 ms
00:51:42.007
00:51:42.007 --- 10.0.0.3 ping statistics ---
00:51:42.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:51:42.007 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms
00:51:42.007 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:51:42.007 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:51:42.007 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.120 ms
00:51:42.007
00:51:42.007 --- 10.0.0.4 ping statistics ---
00:51:42.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:51:42.007 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms
00:51:42.007 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:51:42.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:51:42.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms
00:51:42.007
00:51:42.008 --- 10.0.0.1 ping statistics ---
00:51:42.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:51:42.008 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms
00:51:42.008 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:51:42.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:51:42.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms
00:51:42.008
00:51:42.008 --- 10.0.0.2 ping statistics ---
00:51:42.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:51:42.008 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms
00:51:42.008 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:51:42.008 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0
00:51:42.008 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:51:42.008 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:51:42.008 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:51:42.008 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:51:42.008 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:51:42.008 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:51:42.008 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:51:42.008 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc
00:51:42.008 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:51:42.008 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:51:42.008 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:51:42.267 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83157
00:51:42.267 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc
00:51:42.267 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83157
00:51:42.267 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83157 ']'
00:51:42.267 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:51:42.267 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:51:42.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:51:42.267 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:51:42.267 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:51:42.267 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:51:42.267 [2024-12-09 10:02:49.106702] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization...
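Taken together, the nvmf_veth_init trace above builds a two-namespace test topology: veth pairs whose *_br peers are enslaved to a bridge, initiator addresses 10.0.0.1/10.0.0.2 in the default namespace, target addresses 10.0.0.3/10.0.0.4 inside nvmf_tgt_ns_spdk, iptables ACCEPT rules for port 4420, and a ping sweep to verify reachability before the target starts. A minimal stand-alone sketch of the same layout follows, assuming root and iproute2; names and the 10.0.0.0/24 subnet mirror the log, the second (if2) pair is dropped for brevity, and the cleanup/retry logic of the real nvmf/common.sh is omitted:

#!/usr/bin/env bash
# Minimal sketch of the veth/bridge topology nvmf_veth_init builds above.
set -e

ip netns add nvmf_tgt_ns_spdk

# veth pairs: the initiator end stays in the default netns,
# the target end moves into the namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# addressing: initiator 10.0.0.1, target 10.0.0.3 (as pinged in the log)
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# the bridge ties the *_br peers together so the two namespaces can talk
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# open the NVMe/TCP port, allow bridge-local forwarding, then verify
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3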
00:51:42.267 [2024-12-09 10:02:49.106789] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:51:42.267 [2024-12-09 10:02:49.256139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:42.267 [2024-12-09 10:02:49.304929] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:51:42.267 [2024-12-09 10:02:49.304997] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:51:42.267 [2024-12-09 10:02:49.305004] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:51:42.267 [2024-12-09 10:02:49.305009] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:51:42.267 [2024-12-09 10:02:49.305013] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:51:42.267 [2024-12-09 10:02:49.305281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:51:43.202 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:51:43.202 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:51:43.202 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:51:43.202 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:51:43.202 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:51:43.202 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:51:43.202 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:51:43.202 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:51:43.460 true 00:51:43.460 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:51:43.460 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:51:43.719 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:51:43.719 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:51:43.719 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:51:43.978 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:51:43.978 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:51:44.236 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:51:44.236 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:51:44.236 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:51:44.495 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:51:44.495 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:51:44.753 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:51:44.753 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:51:44.753 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:51:44.753 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:51:45.017 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:51:45.018 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:51:45.018 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:51:45.282 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:51:45.282 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:51:45.542 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:51:45.542 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:51:45.542 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:51:45.801 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:51:45.801 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:51:46.060 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:51:46.060 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:51:46.060 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:51:46.060 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:51:46.060 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:51:46.060 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:51:46.060 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:51:46.060 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:51:46.060 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:51:46.060 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:51:46.060 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:51:46.060 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:51:46.060 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:51:46.060 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:51:46.060 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:51:46.060 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:51:46.060 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:51:46.060 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:51:46.060 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:51:46.060 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.XVieGvszS2 00:51:46.060 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:51:46.060 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.n53ckjTSbO 00:51:46.060 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:51:46.060 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:51:46.060 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.XVieGvszS2 00:51:46.060 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.n53ckjTSbO 00:51:46.060 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:51:46.320 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:51:46.888 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.XVieGvszS2 00:51:46.888 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.XVieGvszS2 00:51:46.888 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:51:46.888 [2024-12-09 10:02:53.856738] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:51:46.888 10:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:51:47.147 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:51:47.406 [2024-12-09 10:02:54.327924] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:51:47.406 [2024-12-09 10:02:54.328134] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:51:47.406 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:51:47.665 malloc0 00:51:47.665 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:51:47.924 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.XVieGvszS2 00:51:48.182 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
00:52:00.663 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.XVieGvszS2
00:52:00.663 Initializing NVMe Controllers
00:52:00.663 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:52:00.663 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:52:00.663 Initialization complete. Launching workers.
00:52:00.663 ========================================================
00:52:00.663 Latency(us)
00:52:00.663 Device Information : IOPS MiB/s Average min max
00:52:00.663 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13518.86 52.81 4734.89 1076.12 16592.88
00:52:00.663 ========================================================
00:52:00.663 Total : 13518.86 52.81 4734.89 1076.12 16592.88
00:52:00.663
00:52:00.663 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XVieGvszS2
00:52:00.664 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:52:00.664 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:52:00.664 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:52:00.664 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.XVieGvszS2
00:52:00.664 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:52:00.664 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83527
00:52:00.664 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:52:00.664 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:52:00.664 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83527 /var/tmp/bdevperf.sock
00:52:00.664 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83527 ']'
00:52:00.664 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:52:00.664 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:52:00.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:52:00.664 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:52:00.664 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:52:00.664 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:52:00.664 [2024-12-09 10:03:05.553534] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization...
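Two pieces of the trace above are worth pulling out. First, the key files used throughout (/tmp/tmp.XVieGvszS2 and /tmp/tmp.n53ckjTSbO) come from format_interchange_psk, whose inline `python -` step appears earlier in the log; judging by the logged output, it base64-encodes the configured key bytes plus a little-endian CRC32 and wraps the result as NVMeTLSkey-1:<digest>:...:, with digest 01 apparently selecting the SHA-256 variant. Second, target/tls.sh@50-59 configure the TLS-enabled target over JSON-RPC. A condensed sketch of both follows; it assumes rpc.py stands for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path used in the log, and the key layout is inferred from this run rather than quoted from a spec:

# Sketch: reproduce the PSK derivation observed in the log (assumed layout:
# base64(key bytes || CRC32(key), little-endian), prefixed "NVMeTLSkey-1:<digest>:").
format_interchange_psk() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
EOF
}

key_path=$(mktemp)
format_interchange_psk 00112233445566778899aabbccddeeff 1 > "$key_path"
chmod 0600 "$key_path"   # the trace chmods both key files to 0600

# Target-side TLS setup, condensed from the target/tls.sh trace above.
rpc.py sock_set_default_impl -i ssl
rpc.py sock_impl_set_options -i ssl --tls-version 13
rpc.py framework_start_init
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 "$key_path"
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The second key, /tmp/tmp.n53ckjTSbO, is produced the same way from ffeeddccbbaa99887766554433221100 but is deliberately never registered with the target; the later expect-failure cases depend on exactly that.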
00:52:00.664 [2024-12-09 10:03:05.553618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83527 ] 00:52:00.664 [2024-12-09 10:03:05.695991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:00.664 [2024-12-09 10:03:05.750875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:52:00.664 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:00.664 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:52:00.664 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XVieGvszS2 00:52:00.664 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:52:00.664 [2024-12-09 10:03:06.881757] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:52:00.664 TLSTESTn1 00:52:00.664 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:52:00.664 Running I/O for 10 seconds... 00:52:02.168 5208.00 IOPS, 20.34 MiB/s [2024-12-09T10:03:10.150Z] 5237.50 IOPS, 20.46 MiB/s [2024-12-09T10:03:11.091Z] 5263.00 IOPS, 20.56 MiB/s [2024-12-09T10:03:12.475Z] 5333.25 IOPS, 20.83 MiB/s [2024-12-09T10:03:13.410Z] 5317.60 IOPS, 20.77 MiB/s [2024-12-09T10:03:14.344Z] 5282.00 IOPS, 20.63 MiB/s [2024-12-09T10:03:15.291Z] 5317.57 IOPS, 20.77 MiB/s [2024-12-09T10:03:16.233Z] 5329.25 IOPS, 20.82 MiB/s [2024-12-09T10:03:17.169Z] 5339.22 IOPS, 20.86 MiB/s [2024-12-09T10:03:17.169Z] 5348.60 IOPS, 20.89 MiB/s 00:52:10.125 Latency(us) 00:52:10.125 [2024-12-09T10:03:17.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:52:10.125 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:52:10.125 Verification LBA range: start 0x0 length 0x2000 00:52:10.125 TLSTESTn1 : 10.01 5353.78 20.91 0.00 0.00 23868.56 5294.39 31594.65 00:52:10.125 [2024-12-09T10:03:17.169Z] =================================================================================================================== 00:52:10.125 [2024-12-09T10:03:17.169Z] Total : 5353.78 20.91 0.00 0.00 23868.56 5294.39 31594.65 00:52:10.125 { 00:52:10.125 "results": [ 00:52:10.125 { 00:52:10.125 "job": "TLSTESTn1", 00:52:10.125 "core_mask": "0x4", 00:52:10.125 "workload": "verify", 00:52:10.125 "status": "finished", 00:52:10.125 "verify_range": { 00:52:10.125 "start": 0, 00:52:10.125 "length": 8192 00:52:10.125 }, 00:52:10.125 "queue_depth": 128, 00:52:10.125 "io_size": 4096, 00:52:10.125 "runtime": 10.013857, 00:52:10.125 "iops": 5353.781265300673, 00:52:10.125 "mibps": 20.913208067580754, 00:52:10.125 "io_failed": 0, 00:52:10.125 "io_timeout": 0, 00:52:10.125 "avg_latency_us": 23868.560566672324, 00:52:10.125 "min_latency_us": 5294.393013100436, 00:52:10.125 "max_latency_us": 31594.648034934497 00:52:10.125 } 00:52:10.125 ], 00:52:10.125 "core_count": 1 00:52:10.125 } 00:52:10.125 10:03:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:52:10.125 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 83527 00:52:10.125 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83527 ']' 00:52:10.125 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83527 00:52:10.125 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:52:10.125 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:10.125 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83527 00:52:10.125 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:52:10.125 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:52:10.126 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83527' 00:52:10.126 killing process with pid 83527 00:52:10.126 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83527 00:52:10.126 Received shutdown signal, test time was about 10.000000 seconds 00:52:10.126 00:52:10.126 Latency(us) 00:52:10.126 [2024-12-09T10:03:17.170Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:52:10.126 [2024-12-09T10:03:17.170Z] =================================================================================================================== 00:52:10.126 [2024-12-09T10:03:17.170Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:52:10.126 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83527 00:52:10.385 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.n53ckjTSbO 00:52:10.385 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:52:10.385 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.n53ckjTSbO 00:52:10.385 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:52:10.385 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:10.385 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:52:10.385 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:10.385 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.n53ckjTSbO 00:52:10.385 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:52:10.385 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:52:10.385 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:52:10.385 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.n53ckjTSbO 00:52:10.385 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:52:10.385 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83680 00:52:10.385 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:52:10.385 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:52:10.385 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83680 /var/tmp/bdevperf.sock 00:52:10.385 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83680 ']' 00:52:10.385 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:52:10.385 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:10.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:52:10.385 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:52:10.385 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:10.385 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:52:10.385 [2024-12-09 10:03:17.356344] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:52:10.385 [2024-12-09 10:03:17.356430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83680 ] 00:52:10.644 [2024-12-09 10:03:17.485040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:10.644 [2024-12-09 10:03:17.538293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:52:11.580 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:11.580 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:52:11.580 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.n53ckjTSbO 00:52:11.580 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:52:11.840 [2024-12-09 10:03:18.719601] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:52:11.840 [2024-12-09 10:03:18.730734] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:52:11.840 [2024-12-09 10:03:18.730890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b25620 (107): Transport endpoint is not connected 00:52:11.840 [2024-12-09 10:03:18.731880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b25620 (9): Bad file descriptor 00:52:11.840 [2024-12-09 
10:03:18.732877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:52:11.840 [2024-12-09 10:03:18.732890] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:52:11.840 [2024-12-09 10:03:18.732897] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:52:11.840 [2024-12-09 10:03:18.732907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:52:11.840 2024/12/09 10:03:18 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:52:11.840 request: 00:52:11.840 { 00:52:11.840 "method": "bdev_nvme_attach_controller", 00:52:11.840 "params": { 00:52:11.840 "name": "TLSTEST", 00:52:11.840 "trtype": "tcp", 00:52:11.840 "traddr": "10.0.0.3", 00:52:11.840 "adrfam": "ipv4", 00:52:11.840 "trsvcid": "4420", 00:52:11.840 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:52:11.840 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:52:11.840 "prchk_reftag": false, 00:52:11.840 "prchk_guard": false, 00:52:11.840 "hdgst": false, 00:52:11.840 "ddgst": false, 00:52:11.840 "psk": "key0", 00:52:11.840 "allow_unrecognized_csi": false 00:52:11.840 } 00:52:11.840 } 00:52:11.840 Got JSON-RPC error response 00:52:11.840 GoRPCClient: error on JSON-RPC call 00:52:11.840 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83680 00:52:11.840 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83680 ']' 00:52:11.840 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83680 00:52:11.840 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:52:11.840 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:11.840 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83680 00:52:11.840 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:52:11.840 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:52:11.840 killing process with pid 83680 00:52:11.840 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83680' 00:52:11.840 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83680 00:52:11.840 Received shutdown signal, test time was about 10.000000 seconds 00:52:11.840 00:52:11.840 Latency(us) 00:52:11.840 [2024-12-09T10:03:18.884Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:52:11.840 [2024-12-09T10:03:18.884Z] =================================================================================================================== 00:52:11.840 [2024-12-09T10:03:18.884Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:52:11.840 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@978 -- # wait 83680 00:52:12.099 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:52:12.099 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:52:12.099 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:52:12.099 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:52:12.099 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:52:12.099 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.XVieGvszS2 00:52:12.099 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:52:12.099 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.XVieGvszS2 00:52:12.099 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:52:12.099 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:12.099 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:52:12.099 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:12.099 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.XVieGvszS2 00:52:12.099 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:52:12.099 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:52:12.099 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:52:12.099 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.XVieGvszS2 00:52:12.099 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:52:12.099 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:52:12.099 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83738 00:52:12.099 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:52:12.099 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83738 /var/tmp/bdevperf.sock 00:52:12.099 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83738 ']' 00:52:12.099 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:52:12.099 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:12.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:52:12.099 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:52:12.099 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:12.099 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:52:12.099 [2024-12-09 10:03:19.009476] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:52:12.099 [2024-12-09 10:03:19.009558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83738 ] 00:52:12.358 [2024-12-09 10:03:19.146873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:12.358 [2024-12-09 10:03:19.201436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:52:12.925 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:12.925 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:52:12.925 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XVieGvszS2 00:52:13.184 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:52:13.444 [2024-12-09 10:03:20.378977] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:52:13.444 [2024-12-09 10:03:20.387594] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:52:13.444 [2024-12-09 10:03:20.387629] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:52:13.444 [2024-12-09 10:03:20.387674] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:52:13.444 [2024-12-09 10:03:20.388436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f4620 (107): Transport endpoint is not connected 00:52:13.444 [2024-12-09 10:03:20.389420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f4620 (9): Bad file descriptor 00:52:13.444 [2024-12-09 10:03:20.390416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:52:13.444 [2024-12-09 10:03:20.390429] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:52:13.444 [2024-12-09 10:03:20.390435] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:52:13.444 [2024-12-09 10:03:20.390445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:52:13.444 2024/12/09 10:03:20 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:52:13.444 request: 00:52:13.444 { 00:52:13.444 "method": "bdev_nvme_attach_controller", 00:52:13.444 "params": { 00:52:13.444 "name": "TLSTEST", 00:52:13.444 "trtype": "tcp", 00:52:13.444 "traddr": "10.0.0.3", 00:52:13.444 "adrfam": "ipv4", 00:52:13.444 "trsvcid": "4420", 00:52:13.444 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:52:13.444 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:52:13.444 "prchk_reftag": false, 00:52:13.444 "prchk_guard": false, 00:52:13.444 "hdgst": false, 00:52:13.444 "ddgst": false, 00:52:13.444 "psk": "key0", 00:52:13.444 "allow_unrecognized_csi": false 00:52:13.444 } 00:52:13.444 } 00:52:13.444 Got JSON-RPC error response 00:52:13.444 GoRPCClient: error on JSON-RPC call 00:52:13.444 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83738 00:52:13.444 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83738 ']' 00:52:13.444 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83738 00:52:13.444 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:52:13.444 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:13.444 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83738 00:52:13.444 killing process with pid 83738 00:52:13.444 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:52:13.444 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:52:13.444 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83738' 00:52:13.444 Received shutdown signal, test time was about 10.000000 seconds 00:52:13.444 00:52:13.444 Latency(us) 00:52:13.444 [2024-12-09T10:03:20.488Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:52:13.444 [2024-12-09T10:03:20.488Z] =================================================================================================================== 00:52:13.444 [2024-12-09T10:03:20.488Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:52:13.444 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83738 00:52:13.444 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83738 00:52:13.705 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:52:13.705 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:52:13.705 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:52:13.705 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:52:13.705 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:52:13.705 10:03:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.XVieGvszS2 00:52:13.705 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:52:13.705 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.XVieGvszS2 00:52:13.705 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:52:13.705 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:13.705 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:52:13.705 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:13.705 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.XVieGvszS2 00:52:13.705 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:52:13.705 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:52:13.705 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:52:13.705 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.XVieGvszS2 00:52:13.705 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:52:13.705 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83785 00:52:13.705 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:52:13.705 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83785 /var/tmp/bdevperf.sock 00:52:13.705 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83785 ']' 00:52:13.705 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:52:13.705 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:13.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:52:13.705 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:52:13.705 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:13.705 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:52:13.705 10:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:52:13.705 [2024-12-09 10:03:20.666200] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:52:13.705 [2024-12-09 10:03:20.666273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83785 ] 00:52:13.964 [2024-12-09 10:03:20.794026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:13.964 [2024-12-09 10:03:20.847202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:52:14.901 10:03:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:14.901 10:03:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:52:14.901 10:03:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XVieGvszS2 00:52:14.901 10:03:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:52:15.160 [2024-12-09 10:03:22.007827] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:52:15.160 [2024-12-09 10:03:22.014761] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:52:15.160 [2024-12-09 10:03:22.014799] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:52:15.160 [2024-12-09 10:03:22.014845] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:52:15.160 [2024-12-09 10:03:22.015227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb620 (107): Transport endpoint is not connected 00:52:15.160 [2024-12-09 10:03:22.016217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb620 (9): Bad file descriptor 00:52:15.160 [2024-12-09 10:03:22.017212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:52:15.160 [2024-12-09 10:03:22.017230] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:52:15.160 [2024-12-09 10:03:22.017237] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:52:15.160 [2024-12-09 10:03:22.017247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
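This second attempt (tls.sh@153) fails the same way from the other direction: the host NQN is now the expected nqn.2016-06.io.spdk:host1, but the subsystem is nqn.2016-06.io.spdk:cnode2, for which the target has no PSK configured. The identity the target searches for is visible verbatim in the tcp.c error above:

NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2

Taken together, the two negative tests confirm that on the target a PSK is bound to a specific (host NQN, subsystem NQN) pair via nvmf_subsystem_add_host --psk, not to the key file alone. The JSON-RPC error dump that follows is again the client-side view of the same Code=-5 failure.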
00:52:15.160 2024/12/09 10:03:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:52:15.160 request: 00:52:15.160 { 00:52:15.160 "method": "bdev_nvme_attach_controller", 00:52:15.160 "params": { 00:52:15.160 "name": "TLSTEST", 00:52:15.160 "trtype": "tcp", 00:52:15.160 "traddr": "10.0.0.3", 00:52:15.160 "adrfam": "ipv4", 00:52:15.160 "trsvcid": "4420", 00:52:15.160 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:52:15.160 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:52:15.160 "prchk_reftag": false, 00:52:15.160 "prchk_guard": false, 00:52:15.160 "hdgst": false, 00:52:15.160 "ddgst": false, 00:52:15.160 "psk": "key0", 00:52:15.160 "allow_unrecognized_csi": false 00:52:15.160 } 00:52:15.160 } 00:52:15.160 Got JSON-RPC error response 00:52:15.160 GoRPCClient: error on JSON-RPC call 00:52:15.160 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83785 00:52:15.160 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83785 ']' 00:52:15.160 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83785 00:52:15.160 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:52:15.160 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:15.160 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83785 00:52:15.160 killing process with pid 83785 00:52:15.160 Received shutdown signal, test time was about 10.000000 seconds 00:52:15.160 00:52:15.160 Latency(us) 00:52:15.160 [2024-12-09T10:03:22.204Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:52:15.160 [2024-12-09T10:03:22.204Z] =================================================================================================================== 00:52:15.160 [2024-12-09T10:03:22.204Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:52:15.160 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:52:15.160 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:52:15.160 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83785' 00:52:15.160 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83785 00:52:15.160 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83785 00:52:15.421 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:52:15.421 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:52:15.421 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:52:15.421 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:52:15.421 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:52:15.421 10:03:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:52:15.421 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:52:15.421 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:52:15.421 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:52:15.421 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:15.421 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:52:15.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:52:15.421 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:15.421 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:52:15.421 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:52:15.421 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:52:15.421 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:52:15.421 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:52:15.421 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:52:15.421 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83843 00:52:15.421 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:52:15.421 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83843 /var/tmp/bdevperf.sock 00:52:15.421 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83843 ']' 00:52:15.421 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:52:15.421 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:15.421 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:52:15.421 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:15.421 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:52:15.421 10:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:52:15.421 [2024-12-09 10:03:22.288285] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:52:15.421 [2024-12-09 10:03:22.288367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83843 ] 00:52:15.421 [2024-12-09 10:03:22.419246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:15.680 [2024-12-09 10:03:22.471964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:52:16.247 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:16.247 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:52:16.247 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:52:16.507 [2024-12-09 10:03:23.440880] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:52:16.507 [2024-12-09 10:03:23.440922] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:52:16.507 2024/12/09 10:03:23 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:52:16.507 request: 00:52:16.507 { 00:52:16.507 "method": "keyring_file_add_key", 00:52:16.507 "params": { 00:52:16.507 "name": "key0", 00:52:16.507 "path": "" 00:52:16.507 } 00:52:16.507 } 00:52:16.507 Got JSON-RPC error response 00:52:16.507 GoRPCClient: error on JSON-RPC call 00:52:16.507 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:52:16.767 [2024-12-09 10:03:23.664617] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:52:16.767 [2024-12-09 10:03:23.664673] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:52:16.767 2024/12/09 10:03:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:52:16.767 request: 00:52:16.767 { 00:52:16.767 "method": "bdev_nvme_attach_controller", 00:52:16.767 "params": { 00:52:16.767 "name": "TLSTEST", 00:52:16.767 "trtype": "tcp", 00:52:16.767 "traddr": "10.0.0.3", 00:52:16.767 "adrfam": "ipv4", 00:52:16.767 "trsvcid": "4420", 00:52:16.767 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:52:16.767 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:52:16.767 "prchk_reftag": false, 00:52:16.767 "prchk_guard": false, 00:52:16.767 "hdgst": false, 00:52:16.767 "ddgst": false, 00:52:16.767 "psk": "key0", 00:52:16.767 "allow_unrecognized_csi": false 00:52:16.767 } 00:52:16.767 } 00:52:16.767 Got JSON-RPC error response 00:52:16.767 GoRPCClient: error on JSON-RPC call 00:52:16.767 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83843 00:52:16.767 10:03:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83843 ']' 00:52:16.767 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83843 00:52:16.767 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:52:16.767 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:16.767 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83843 00:52:16.767 killing process with pid 83843 00:52:16.767 Received shutdown signal, test time was about 10.000000 seconds 00:52:16.767 00:52:16.767 Latency(us) 00:52:16.767 [2024-12-09T10:03:23.811Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:52:16.767 [2024-12-09T10:03:23.811Z] =================================================================================================================== 00:52:16.767 [2024-12-09T10:03:23.811Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:52:16.767 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:52:16.767 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:52:16.767 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83843' 00:52:16.767 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83843 00:52:16.767 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83843 00:52:17.026 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:52:17.026 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:52:17.026 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:52:17.026 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:52:17.026 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:52:17.026 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 83157 00:52:17.026 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83157 ']' 00:52:17.026 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83157 00:52:17.026 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:52:17.027 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:17.027 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83157 00:52:17.027 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:52:17.027 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:52:17.027 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83157' 00:52:17.027 killing process with pid 83157 00:52:17.027 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83157 00:52:17.027 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83157 00:52:17.286 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:52:17.286 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:52:17.286 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:52:17.286 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:52:17.286 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:52:17.286 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:52:17.286 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:52:17.286 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:52:17.286 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:52:17.286 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.h8ud1MAmqT 00:52:17.286 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:52:17.286 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.h8ud1MAmqT 00:52:17.286 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:52:17.286 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:52:17.286 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:52:17.286 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:52:17.286 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83910 00:52:17.286 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:52:17.286 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83910 00:52:17.286 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83910 ']' 00:52:17.286 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:17.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:52:17.286 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:17.286 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:17.286 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:17.286 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:52:17.286 [2024-12-09 10:03:24.225255] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
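Before the positive tests, target/tls.sh@160 derives a key in the TLS PSK interchange format: format_interchange_psk feeds 48 hex characters and a digest index of 2 through nvmf/common.sh's format_key, which shells out to python and prints the NVMeTLSkey-1:02:...: value captured as key_long above (the base64 payload decodes to the 48 input characters followed by four CRC bytes). A minimal sketch of that derivation, reconstructed from the observed output rather than from nvmf/common.sh itself, so treat the exact byte layout as an assumption:

# Assumed layout: base64(key || CRC32(key), little-endian CRC), a
# two-hex-digit digest index, and a trailing colon -- this reproduces
# the key_long value printed above.
python3 - <<'EOF'
import base64, zlib
key = b"00112233445566778899aabbccddeeff0011223344556677"
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:{:02x}:{}:".format(2, base64.b64encode(key + crc).decode()))
EOF

The result is written to a mktemp file (/tmp/tmp.h8ud1MAmqT) and chmod'd to 0600, which matters for the keyring permission checks exercised later in this log; the nvmf target then restarts below with -m 0x2.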
00:52:17.286 [2024-12-09 10:03:24.225329] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:52:17.545 [2024-12-09 10:03:24.360794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:17.545 [2024-12-09 10:03:24.414671] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:52:17.545 [2024-12-09 10:03:24.414748] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:52:17.545 [2024-12-09 10:03:24.414756] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:52:17.545 [2024-12-09 10:03:24.414762] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:52:17.545 [2024-12-09 10:03:24.414767] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:52:17.545 [2024-12-09 10:03:24.415049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:18.113 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:18.113 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:52:18.113 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:52:18.113 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:52:18.113 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:52:18.113 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:52:18.114 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.h8ud1MAmqT 00:52:18.114 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.h8ud1MAmqT 00:52:18.114 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:52:18.373 [2024-12-09 10:03:25.370600] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:52:18.373 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:52:18.632 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:52:18.891 [2024-12-09 10:03:25.805859] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:52:18.891 [2024-12-09 10:03:25.806078] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:52:18.891 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:52:19.151 malloc0 00:52:19.151 10:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:52:19.439 10:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
keyring_file_add_key key0 /tmp/tmp.h8ud1MAmqT 00:52:19.708 10:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:52:19.708 10:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.h8ud1MAmqT 00:52:19.708 10:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:52:19.708 10:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:52:19.708 10:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:52:19.708 10:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.h8ud1MAmqT 00:52:19.708 10:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:52:19.708 10:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:52:19.708 10:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84015 00:52:19.708 10:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:52:19.708 10:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84015 /var/tmp/bdevperf.sock 00:52:19.708 10:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84015 ']' 00:52:19.708 10:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:52:19.708 10:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:19.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:52:19.708 10:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:52:19.708 10:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:19.708 10:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:52:19.968 [2024-12-09 10:03:26.770993] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:52:19.968 [2024-12-09 10:03:26.771066] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84015 ] 00:52:19.968 [2024-12-09 10:03:26.916624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:19.968 [2024-12-09 10:03:26.971160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:52:20.228 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:20.228 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:52:20.228 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.h8ud1MAmqT 00:52:20.487 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:52:20.487 [2024-12-09 10:03:27.528948] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:52:20.746 TLSTESTn1 00:52:20.746 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:52:20.746 Running I/O for 10 seconds... 00:52:22.696 5248.00 IOPS, 20.50 MiB/s [2024-12-09T10:03:31.119Z] 5312.00 IOPS, 20.75 MiB/s [2024-12-09T10:03:32.060Z] 5376.00 IOPS, 21.00 MiB/s [2024-12-09T10:03:32.997Z] 5405.75 IOPS, 21.12 MiB/s [2024-12-09T10:03:33.940Z] 5427.20 IOPS, 21.20 MiB/s [2024-12-09T10:03:34.876Z] 5439.50 IOPS, 21.25 MiB/s [2024-12-09T10:03:35.813Z] 5376.14 IOPS, 21.00 MiB/s [2024-12-09T10:03:36.754Z] 5296.00 IOPS, 20.69 MiB/s [2024-12-09T10:03:38.151Z] 5223.89 IOPS, 20.41 MiB/s [2024-12-09T10:03:38.151Z] 5184.00 IOPS, 20.25 MiB/s 00:52:31.107 Latency(us) 00:52:31.107 [2024-12-09T10:03:38.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:52:31.107 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:52:31.107 Verification LBA range: start 0x0 length 0x2000 00:52:31.107 TLSTESTn1 : 10.02 5187.33 20.26 0.00 0.00 24631.32 7841.43 17857.84 00:52:31.107 [2024-12-09T10:03:38.151Z] =================================================================================================================== 00:52:31.107 [2024-12-09T10:03:38.151Z] Total : 5187.33 20.26 0.00 0.00 24631.32 7841.43 17857.84 00:52:31.107 { 00:52:31.107 "results": [ 00:52:31.107 { 00:52:31.107 "job": "TLSTESTn1", 00:52:31.107 "core_mask": "0x4", 00:52:31.107 "workload": "verify", 00:52:31.107 "status": "finished", 00:52:31.107 "verify_range": { 00:52:31.107 "start": 0, 00:52:31.107 "length": 8192 00:52:31.107 }, 00:52:31.107 "queue_depth": 128, 00:52:31.107 "io_size": 4096, 00:52:31.107 "runtime": 10.018264, 00:52:31.107 "iops": 5187.325868034622, 00:52:31.107 "mibps": 20.26299167201024, 00:52:31.107 "io_failed": 0, 00:52:31.107 "io_timeout": 0, 00:52:31.107 "avg_latency_us": 24631.32468001807, 00:52:31.107 "min_latency_us": 7841.425327510917, 00:52:31.107 "max_latency_us": 17857.844541484716 00:52:31.107 } 00:52:31.107 ], 00:52:31.107 "core_count": 1 00:52:31.107 } 00:52:31.107 10:03:37 
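This is the first attach in the section that succeeds end to end: the same interchange-format key was registered on both sides under the host1/cnode1 pairing, so the TLS handshake completes, TLSTESTn1 appears, and bdevperf sustains roughly 5.2K IOPS of 4 KiB verified reads (about 20 MiB/s) over the TLS-wrapped TCP connection for the 10-second run summarized above. Consolidated from the setup_nvmf_tgt trace earlier (tls.sh@52-59), the target-side sequence behind this result is, in sketch form:

# Target-side TLS setup (commands as traced; rpc.py abbreviates the
# full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path and talks to
# the default /var/tmp/spdk.sock).
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.h8ud1MAmqT
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The I/O itself is driven by bdevperf.py perform_tests against the bdevperf RPC socket, as shown at the start of the run; the teardown that follows kills the bdevperf process (pid 84015) now that the run has completed.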
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:52:31.107 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 84015 00:52:31.107 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84015 ']' 00:52:31.107 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84015 00:52:31.107 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:52:31.107 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:31.108 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84015 00:52:31.108 killing process with pid 84015 00:52:31.108 Received shutdown signal, test time was about 10.000000 seconds 00:52:31.108 00:52:31.108 Latency(us) 00:52:31.108 [2024-12-09T10:03:38.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:52:31.108 [2024-12-09T10:03:38.152Z] =================================================================================================================== 00:52:31.108 [2024-12-09T10:03:38.152Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:52:31.108 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:52:31.108 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:52:31.108 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84015' 00:52:31.108 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84015 00:52:31.108 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84015 00:52:31.108 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.h8ud1MAmqT 00:52:31.108 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.h8ud1MAmqT 00:52:31.108 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:52:31.108 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.h8ud1MAmqT 00:52:31.108 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:52:31.108 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:31.108 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:52:31.108 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:31.108 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.h8ud1MAmqT 00:52:31.108 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:52:31.108 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:52:31.108 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:52:31.108 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # psk=/tmp/tmp.h8ud1MAmqT 00:52:31.108 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:52:31.108 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:52:31.108 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84166 00:52:31.108 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:52:31.108 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84166 /var/tmp/bdevperf.sock 00:52:31.108 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84166 ']' 00:52:31.108 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:52:31.108 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:31.108 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:52:31.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:52:31.108 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:31.108 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:52:31.108 [2024-12-09 10:03:38.040162] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:52:31.108 [2024-12-09 10:03:38.040354] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84166 ] 00:52:31.367 [2024-12-09 10:03:38.197006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:31.367 [2024-12-09 10:03:38.255784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:52:32.304 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:32.304 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:52:32.304 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.h8ud1MAmqT 00:52:32.304 [2024-12-09 10:03:39.265089] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.h8ud1MAmqT': 0100666 00:52:32.304 [2024-12-09 10:03:39.265136] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:52:32.304 2024/12/09 10:03:39 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.h8ud1MAmqT], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:52:32.304 request: 00:52:32.304 { 00:52:32.304 "method": "keyring_file_add_key", 00:52:32.304 "params": { 00:52:32.304 "name": "key0", 00:52:32.304 "path": "/tmp/tmp.h8ud1MAmqT" 00:52:32.304 } 00:52:32.304 } 00:52:32.304 Got JSON-RPC error response 00:52:32.304 GoRPCClient: error on JSON-RPC call 00:52:32.304 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:52:32.564 [2024-12-09 10:03:39.520749] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:52:32.564 [2024-12-09 10:03:39.520819] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:52:32.564 2024/12/09 10:03:39 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:52:32.564 request: 00:52:32.564 { 00:52:32.564 "method": "bdev_nvme_attach_controller", 00:52:32.564 "params": { 00:52:32.564 "name": "TLSTEST", 00:52:32.564 "trtype": "tcp", 00:52:32.564 "traddr": "10.0.0.3", 00:52:32.564 "adrfam": "ipv4", 00:52:32.564 "trsvcid": "4420", 00:52:32.564 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:52:32.564 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:52:32.564 "prchk_reftag": false, 00:52:32.564 "prchk_guard": false, 00:52:32.564 "hdgst": false, 00:52:32.564 "ddgst": false, 00:52:32.564 "psk": "key0", 00:52:32.564 "allow_unrecognized_csi": false 00:52:32.564 } 00:52:32.564 } 00:52:32.564 Got JSON-RPC error response 00:52:32.564 GoRPCClient: error on JSON-RPC call 00:52:32.564 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 84166 00:52:32.564 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84166 ']' 00:52:32.564 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84166 00:52:32.564 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:52:32.564 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:32.564 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84166 00:52:32.564 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:52:32.564 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:52:32.564 killing process with pid 84166 00:52:32.564 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84166' 00:52:32.564 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84166 00:52:32.564 Received shutdown signal, test time was about 10.000000 seconds 00:52:32.564 00:52:32.564 Latency(us) 00:52:32.564 [2024-12-09T10:03:39.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:52:32.564 [2024-12-09T10:03:39.608Z] =================================================================================================================== 00:52:32.564 [2024-12-09T10:03:39.608Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:52:32.564 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84166 00:52:32.823 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 
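The test that just finished (tls.sh@171-172) exercises the keyring's file validation: after chmod 0666 on the previously working key, keyring_file_add_key rejects it with "Invalid permissions for key file '/tmp/tmp.h8ud1MAmqT': 0100666", and the subsequent attach fails with Code=-126 (Required key not available) because key0 was never created. Together with the empty-path rejection at tls.sh@156 earlier ("Non-absolute paths are not allowed"), the keyring evidently requires an absolute path to a file that group and others cannot access; 0600 satisfies this, which is why tls.sh@182 restores that mode below before the next test block. In sketch form, against a running bdevperf RPC socket:

chmod 0666 /tmp/tmp.h8ud1MAmqT
rpc.py keyring_file_add_key key0 /tmp/tmp.h8ud1MAmqT   # rejected: 0100666
chmod 0600 /tmp/tmp.h8ud1MAmqT
rpc.py keyring_file_add_key key0 /tmp/tmp.h8ud1MAmqT   # accepted, as the later setup in this log shows
rpc.py keyring_file_add_key key0 ''                    # rejected: non-absolute path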
00:52:32.823 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:52:32.823 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:52:32.823 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:52:32.823 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:52:32.823 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 83910 00:52:32.823 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83910 ']' 00:52:32.823 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83910 00:52:32.823 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:52:32.823 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:32.823 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83910 00:52:32.823 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:52:32.823 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:52:32.823 killing process with pid 83910 00:52:32.823 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83910' 00:52:32.823 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83910 00:52:32.823 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83910 00:52:33.082 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:52:33.082 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:52:33.082 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:52:33.082 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:52:33.082 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84228 00:52:33.082 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:52:33.082 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84228 00:52:33.082 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84228 ']' 00:52:33.082 10:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:33.082 10:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:33.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:52:33.082 10:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:33.082 10:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:33.082 10:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:52:33.082 [2024-12-09 10:03:40.059322] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:52:33.082 [2024-12-09 10:03:40.059401] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:52:33.341 [2024-12-09 10:03:40.191728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:33.341 [2024-12-09 10:03:40.247899] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:52:33.341 [2024-12-09 10:03:40.248146] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:52:33.341 [2024-12-09 10:03:40.248216] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:52:33.341 [2024-12-09 10:03:40.248282] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:52:33.341 [2024-12-09 10:03:40.248351] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:52:33.341 [2024-12-09 10:03:40.248708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:34.279 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:34.279 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:52:34.279 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:52:34.279 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:52:34.279 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:52:34.279 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:52:34.279 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.h8ud1MAmqT 00:52:34.279 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:52:34.279 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.h8ud1MAmqT 00:52:34.279 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:52:34.279 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:34.279 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:52:34.279 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:34.279 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.h8ud1MAmqT 00:52:34.279 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.h8ud1MAmqT 00:52:34.279 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:52:34.279 [2024-12-09 10:03:41.305427] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:52:34.537 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:52:34.538 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:52:34.818 [2024-12-09 10:03:41.804595] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:52:34.818 [2024-12-09 10:03:41.804822] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:52:34.818 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:52:35.077 malloc0 00:52:35.077 10:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:52:35.336 10:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.h8ud1MAmqT 00:52:35.595 [2024-12-09 10:03:42.564690] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.h8ud1MAmqT': 0100666 00:52:35.595 [2024-12-09 10:03:42.564740] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:52:35.595 2024/12/09 10:03:42 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.h8ud1MAmqT], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:52:35.595 request: 00:52:35.595 { 00:52:35.595 "method": "keyring_file_add_key", 00:52:35.595 "params": { 00:52:35.595 "name": "key0", 00:52:35.595 "path": "/tmp/tmp.h8ud1MAmqT" 00:52:35.595 } 00:52:35.595 } 00:52:35.595 Got JSON-RPC error response 00:52:35.595 GoRPCClient: error on JSON-RPC call 00:52:35.596 10:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:52:35.855 [2024-12-09 10:03:42.800322] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:52:35.855 [2024-12-09 10:03:42.800401] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:52:35.855 2024/12/09 10:03:42 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:key0], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:52:35.855 request: 00:52:35.855 { 00:52:35.855 "method": "nvmf_subsystem_add_host", 00:52:35.855 "params": { 00:52:35.855 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:52:35.855 "host": "nqn.2016-06.io.spdk:host1", 00:52:35.855 "psk": "key0" 00:52:35.855 } 00:52:35.855 } 00:52:35.855 Got JSON-RPC error response 00:52:35.855 GoRPCClient: error on JSON-RPC call 00:52:35.855 10:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:52:35.855 10:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:52:35.855 10:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:52:35.855 10:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:52:35.855 10:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 84228 00:52:35.855 10:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84228 ']' 00:52:35.855 10:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 84228 00:52:35.855 10:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:52:35.855 10:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:35.855 10:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84228 00:52:35.855 10:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:52:35.855 10:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:52:35.855 killing process with pid 84228 00:52:35.855 10:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84228' 00:52:35.855 10:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84228 00:52:35.855 10:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84228 00:52:36.114 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.h8ud1MAmqT 00:52:36.114 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:52:36.114 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:52:36.114 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:52:36.114 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:52:36.114 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84348 00:52:36.114 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:52:36.114 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84348 00:52:36.114 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84348 ']' 00:52:36.114 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:36.114 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:36.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:52:36.114 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:36.114 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:36.114 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:52:36.114 [2024-12-09 10:03:43.133056] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:52:36.114 [2024-12-09 10:03:43.133134] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:52:36.373 [2024-12-09 10:03:43.282686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:36.373 [2024-12-09 10:03:43.339450] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:52:36.373 [2024-12-09 10:03:43.339518] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:52:36.373 [2024-12-09 10:03:43.339528] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:52:36.373 [2024-12-09 10:03:43.339536] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:52:36.373 [2024-12-09 10:03:43.339542] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:52:36.373 [2024-12-09 10:03:43.339841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:37.311 10:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:37.311 10:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:52:37.311 10:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:52:37.311 10:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:52:37.311 10:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:52:37.311 10:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:52:37.311 10:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.h8ud1MAmqT 00:52:37.311 10:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.h8ud1MAmqT 00:52:37.311 10:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:52:37.311 [2024-12-09 10:03:44.336465] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:52:37.570 10:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:52:37.570 10:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:52:37.828 [2024-12-09 10:03:44.779695] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:52:37.828 [2024-12-09 10:03:44.779918] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:52:37.828 10:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:52:38.087 malloc0 00:52:38.087 10:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:52:38.346 10:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.h8ud1MAmqT 00:52:38.605 10:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:52:38.863 10:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=84462 00:52:38.863 10:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:52:38.863 10:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:52:38.863 10:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 84462 /var/tmp/bdevperf.sock 00:52:38.863 10:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84462 ']' 00:52:38.863 10:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:52:38.863 10:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:38.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:52:38.863 10:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:52:38.864 10:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:38.864 10:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:52:38.864 [2024-12-09 10:03:45.791140] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:52:38.864 [2024-12-09 10:03:45.791221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84462 ] 00:52:39.123 [2024-12-09 10:03:45.937883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:39.123 [2024-12-09 10:03:45.993309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:52:39.713 10:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:39.713 10:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:52:39.713 10:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.h8ud1MAmqT 00:52:39.972 10:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:52:40.232 [2024-12-09 10:03:47.106932] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:52:40.232 TLSTESTn1 00:52:40.232 10:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:52:40.492 10:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:52:40.492 "subsystems": [ 00:52:40.492 { 00:52:40.492 "subsystem": "keyring", 00:52:40.492 "config": [ 00:52:40.492 { 00:52:40.492 "method": "keyring_file_add_key", 00:52:40.492 "params": { 00:52:40.492 "name": "key0", 00:52:40.492 "path": "/tmp/tmp.h8ud1MAmqT" 00:52:40.492 } 00:52:40.492 } 00:52:40.492 ] 00:52:40.492 }, 00:52:40.492 { 00:52:40.492 "subsystem": "iobuf", 00:52:40.492 "config": [ 00:52:40.492 { 00:52:40.492 "method": "iobuf_set_options", 00:52:40.492 "params": { 00:52:40.492 "enable_numa": false, 00:52:40.492 "large_bufsize": 135168, 00:52:40.492 "large_pool_count": 1024, 00:52:40.492 
"small_bufsize": 8192, 00:52:40.492 "small_pool_count": 8192 00:52:40.492 } 00:52:40.492 } 00:52:40.492 ] 00:52:40.492 }, 00:52:40.492 { 00:52:40.492 "subsystem": "sock", 00:52:40.492 "config": [ 00:52:40.492 { 00:52:40.492 "method": "sock_set_default_impl", 00:52:40.492 "params": { 00:52:40.492 "impl_name": "posix" 00:52:40.492 } 00:52:40.492 }, 00:52:40.492 { 00:52:40.492 "method": "sock_impl_set_options", 00:52:40.492 "params": { 00:52:40.492 "enable_ktls": false, 00:52:40.492 "enable_placement_id": 0, 00:52:40.492 "enable_quickack": false, 00:52:40.492 "enable_recv_pipe": true, 00:52:40.492 "enable_zerocopy_send_client": false, 00:52:40.492 "enable_zerocopy_send_server": true, 00:52:40.492 "impl_name": "ssl", 00:52:40.492 "recv_buf_size": 4096, 00:52:40.492 "send_buf_size": 4096, 00:52:40.492 "tls_version": 0, 00:52:40.492 "zerocopy_threshold": 0 00:52:40.492 } 00:52:40.492 }, 00:52:40.492 { 00:52:40.492 "method": "sock_impl_set_options", 00:52:40.492 "params": { 00:52:40.492 "enable_ktls": false, 00:52:40.492 "enable_placement_id": 0, 00:52:40.492 "enable_quickack": false, 00:52:40.492 "enable_recv_pipe": true, 00:52:40.492 "enable_zerocopy_send_client": false, 00:52:40.492 "enable_zerocopy_send_server": true, 00:52:40.492 "impl_name": "posix", 00:52:40.492 "recv_buf_size": 2097152, 00:52:40.492 "send_buf_size": 2097152, 00:52:40.492 "tls_version": 0, 00:52:40.492 "zerocopy_threshold": 0 00:52:40.492 } 00:52:40.492 } 00:52:40.492 ] 00:52:40.492 }, 00:52:40.492 { 00:52:40.492 "subsystem": "vmd", 00:52:40.492 "config": [] 00:52:40.492 }, 00:52:40.492 { 00:52:40.492 "subsystem": "accel", 00:52:40.492 "config": [ 00:52:40.492 { 00:52:40.492 "method": "accel_set_options", 00:52:40.492 "params": { 00:52:40.492 "buf_count": 2048, 00:52:40.492 "large_cache_size": 16, 00:52:40.492 "sequence_count": 2048, 00:52:40.492 "small_cache_size": 128, 00:52:40.492 "task_count": 2048 00:52:40.492 } 00:52:40.492 } 00:52:40.492 ] 00:52:40.492 }, 00:52:40.492 { 00:52:40.492 "subsystem": "bdev", 00:52:40.492 "config": [ 00:52:40.492 { 00:52:40.492 "method": "bdev_set_options", 00:52:40.492 "params": { 00:52:40.492 "bdev_auto_examine": true, 00:52:40.492 "bdev_io_cache_size": 256, 00:52:40.492 "bdev_io_pool_size": 65535, 00:52:40.492 "iobuf_large_cache_size": 16, 00:52:40.492 "iobuf_small_cache_size": 128 00:52:40.492 } 00:52:40.492 }, 00:52:40.492 { 00:52:40.492 "method": "bdev_raid_set_options", 00:52:40.492 "params": { 00:52:40.492 "process_max_bandwidth_mb_sec": 0, 00:52:40.492 "process_window_size_kb": 1024 00:52:40.492 } 00:52:40.492 }, 00:52:40.492 { 00:52:40.492 "method": "bdev_iscsi_set_options", 00:52:40.492 "params": { 00:52:40.492 "timeout_sec": 30 00:52:40.492 } 00:52:40.492 }, 00:52:40.492 { 00:52:40.492 "method": "bdev_nvme_set_options", 00:52:40.492 "params": { 00:52:40.492 "action_on_timeout": "none", 00:52:40.492 "allow_accel_sequence": false, 00:52:40.492 "arbitration_burst": 0, 00:52:40.492 "bdev_retry_count": 3, 00:52:40.492 "ctrlr_loss_timeout_sec": 0, 00:52:40.492 "delay_cmd_submit": true, 00:52:40.492 "dhchap_dhgroups": [ 00:52:40.492 "null", 00:52:40.492 "ffdhe2048", 00:52:40.492 "ffdhe3072", 00:52:40.492 "ffdhe4096", 00:52:40.492 "ffdhe6144", 00:52:40.492 "ffdhe8192" 00:52:40.492 ], 00:52:40.492 "dhchap_digests": [ 00:52:40.492 "sha256", 00:52:40.492 "sha384", 00:52:40.492 "sha512" 00:52:40.492 ], 00:52:40.492 "disable_auto_failback": false, 00:52:40.492 "fast_io_fail_timeout_sec": 0, 00:52:40.492 "generate_uuids": false, 00:52:40.492 "high_priority_weight": 0, 00:52:40.492 
"io_path_stat": false, 00:52:40.492 "io_queue_requests": 0, 00:52:40.492 "keep_alive_timeout_ms": 10000, 00:52:40.492 "low_priority_weight": 0, 00:52:40.492 "medium_priority_weight": 0, 00:52:40.492 "nvme_adminq_poll_period_us": 10000, 00:52:40.492 "nvme_error_stat": false, 00:52:40.492 "nvme_ioq_poll_period_us": 0, 00:52:40.492 "rdma_cm_event_timeout_ms": 0, 00:52:40.492 "rdma_max_cq_size": 0, 00:52:40.492 "rdma_srq_size": 0, 00:52:40.492 "reconnect_delay_sec": 0, 00:52:40.492 "timeout_admin_us": 0, 00:52:40.492 "timeout_us": 0, 00:52:40.492 "transport_ack_timeout": 0, 00:52:40.492 "transport_retry_count": 4, 00:52:40.492 "transport_tos": 0 00:52:40.492 } 00:52:40.492 }, 00:52:40.492 { 00:52:40.492 "method": "bdev_nvme_set_hotplug", 00:52:40.492 "params": { 00:52:40.492 "enable": false, 00:52:40.492 "period_us": 100000 00:52:40.492 } 00:52:40.492 }, 00:52:40.492 { 00:52:40.492 "method": "bdev_malloc_create", 00:52:40.492 "params": { 00:52:40.492 "block_size": 4096, 00:52:40.492 "dif_is_head_of_md": false, 00:52:40.492 "dif_pi_format": 0, 00:52:40.492 "dif_type": 0, 00:52:40.492 "md_size": 0, 00:52:40.492 "name": "malloc0", 00:52:40.492 "num_blocks": 8192, 00:52:40.492 "optimal_io_boundary": 0, 00:52:40.492 "physical_block_size": 4096, 00:52:40.492 "uuid": "0efb65d3-cc20-4847-92ef-e60fdb8d1d05" 00:52:40.492 } 00:52:40.492 }, 00:52:40.492 { 00:52:40.492 "method": "bdev_wait_for_examine" 00:52:40.492 } 00:52:40.492 ] 00:52:40.492 }, 00:52:40.493 { 00:52:40.493 "subsystem": "nbd", 00:52:40.493 "config": [] 00:52:40.493 }, 00:52:40.493 { 00:52:40.493 "subsystem": "scheduler", 00:52:40.493 "config": [ 00:52:40.493 { 00:52:40.493 "method": "framework_set_scheduler", 00:52:40.493 "params": { 00:52:40.493 "name": "static" 00:52:40.493 } 00:52:40.493 } 00:52:40.493 ] 00:52:40.493 }, 00:52:40.493 { 00:52:40.493 "subsystem": "nvmf", 00:52:40.493 "config": [ 00:52:40.493 { 00:52:40.493 "method": "nvmf_set_config", 00:52:40.493 "params": { 00:52:40.493 "admin_cmd_passthru": { 00:52:40.493 "identify_ctrlr": false 00:52:40.493 }, 00:52:40.493 "dhchap_dhgroups": [ 00:52:40.493 "null", 00:52:40.493 "ffdhe2048", 00:52:40.493 "ffdhe3072", 00:52:40.493 "ffdhe4096", 00:52:40.493 "ffdhe6144", 00:52:40.493 "ffdhe8192" 00:52:40.493 ], 00:52:40.493 "dhchap_digests": [ 00:52:40.493 "sha256", 00:52:40.493 "sha384", 00:52:40.493 "sha512" 00:52:40.493 ], 00:52:40.493 "discovery_filter": "match_any" 00:52:40.493 } 00:52:40.493 }, 00:52:40.493 { 00:52:40.493 "method": "nvmf_set_max_subsystems", 00:52:40.493 "params": { 00:52:40.493 "max_subsystems": 1024 00:52:40.493 } 00:52:40.493 }, 00:52:40.493 { 00:52:40.493 "method": "nvmf_set_crdt", 00:52:40.493 "params": { 00:52:40.493 "crdt1": 0, 00:52:40.493 "crdt2": 0, 00:52:40.493 "crdt3": 0 00:52:40.493 } 00:52:40.493 }, 00:52:40.493 { 00:52:40.493 "method": "nvmf_create_transport", 00:52:40.493 "params": { 00:52:40.493 "abort_timeout_sec": 1, 00:52:40.493 "ack_timeout": 0, 00:52:40.493 "buf_cache_size": 4294967295, 00:52:40.493 "c2h_success": false, 00:52:40.493 "data_wr_pool_size": 0, 00:52:40.493 "dif_insert_or_strip": false, 00:52:40.493 "in_capsule_data_size": 4096, 00:52:40.493 "io_unit_size": 131072, 00:52:40.493 "max_aq_depth": 128, 00:52:40.493 "max_io_qpairs_per_ctrlr": 127, 00:52:40.493 "max_io_size": 131072, 00:52:40.493 "max_queue_depth": 128, 00:52:40.493 "num_shared_buffers": 511, 00:52:40.493 "sock_priority": 0, 00:52:40.493 "trtype": "TCP", 00:52:40.493 "zcopy": false 00:52:40.493 } 00:52:40.493 }, 00:52:40.493 { 00:52:40.493 "method": 
"nvmf_create_subsystem", 00:52:40.493 "params": { 00:52:40.493 "allow_any_host": false, 00:52:40.493 "ana_reporting": false, 00:52:40.493 "max_cntlid": 65519, 00:52:40.493 "max_namespaces": 10, 00:52:40.493 "min_cntlid": 1, 00:52:40.493 "model_number": "SPDK bdev Controller", 00:52:40.493 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:52:40.493 "serial_number": "SPDK00000000000001" 00:52:40.493 } 00:52:40.493 }, 00:52:40.493 { 00:52:40.493 "method": "nvmf_subsystem_add_host", 00:52:40.493 "params": { 00:52:40.493 "host": "nqn.2016-06.io.spdk:host1", 00:52:40.493 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:52:40.493 "psk": "key0" 00:52:40.493 } 00:52:40.493 }, 00:52:40.493 { 00:52:40.493 "method": "nvmf_subsystem_add_ns", 00:52:40.493 "params": { 00:52:40.493 "namespace": { 00:52:40.493 "bdev_name": "malloc0", 00:52:40.493 "nguid": "0EFB65D3CC20484792EFE60FDB8D1D05", 00:52:40.493 "no_auto_visible": false, 00:52:40.493 "nsid": 1, 00:52:40.493 "uuid": "0efb65d3-cc20-4847-92ef-e60fdb8d1d05" 00:52:40.493 }, 00:52:40.493 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:52:40.493 } 00:52:40.493 }, 00:52:40.493 { 00:52:40.493 "method": "nvmf_subsystem_add_listener", 00:52:40.493 "params": { 00:52:40.493 "listen_address": { 00:52:40.493 "adrfam": "IPv4", 00:52:40.493 "traddr": "10.0.0.3", 00:52:40.493 "trsvcid": "4420", 00:52:40.493 "trtype": "TCP" 00:52:40.493 }, 00:52:40.493 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:52:40.493 "secure_channel": true 00:52:40.493 } 00:52:40.493 } 00:52:40.493 ] 00:52:40.493 } 00:52:40.493 ] 00:52:40.493 }' 00:52:40.493 10:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:52:41.063 10:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:52:41.063 "subsystems": [ 00:52:41.063 { 00:52:41.063 "subsystem": "keyring", 00:52:41.063 "config": [ 00:52:41.063 { 00:52:41.063 "method": "keyring_file_add_key", 00:52:41.063 "params": { 00:52:41.063 "name": "key0", 00:52:41.063 "path": "/tmp/tmp.h8ud1MAmqT" 00:52:41.063 } 00:52:41.063 } 00:52:41.063 ] 00:52:41.063 }, 00:52:41.063 { 00:52:41.063 "subsystem": "iobuf", 00:52:41.063 "config": [ 00:52:41.063 { 00:52:41.063 "method": "iobuf_set_options", 00:52:41.063 "params": { 00:52:41.063 "enable_numa": false, 00:52:41.063 "large_bufsize": 135168, 00:52:41.063 "large_pool_count": 1024, 00:52:41.063 "small_bufsize": 8192, 00:52:41.063 "small_pool_count": 8192 00:52:41.063 } 00:52:41.063 } 00:52:41.063 ] 00:52:41.063 }, 00:52:41.063 { 00:52:41.063 "subsystem": "sock", 00:52:41.063 "config": [ 00:52:41.063 { 00:52:41.063 "method": "sock_set_default_impl", 00:52:41.063 "params": { 00:52:41.063 "impl_name": "posix" 00:52:41.063 } 00:52:41.063 }, 00:52:41.063 { 00:52:41.063 "method": "sock_impl_set_options", 00:52:41.063 "params": { 00:52:41.063 "enable_ktls": false, 00:52:41.063 "enable_placement_id": 0, 00:52:41.063 "enable_quickack": false, 00:52:41.063 "enable_recv_pipe": true, 00:52:41.063 "enable_zerocopy_send_client": false, 00:52:41.063 "enable_zerocopy_send_server": true, 00:52:41.063 "impl_name": "ssl", 00:52:41.063 "recv_buf_size": 4096, 00:52:41.063 "send_buf_size": 4096, 00:52:41.063 "tls_version": 0, 00:52:41.063 "zerocopy_threshold": 0 00:52:41.063 } 00:52:41.063 }, 00:52:41.063 { 00:52:41.063 "method": "sock_impl_set_options", 00:52:41.063 "params": { 00:52:41.063 "enable_ktls": false, 00:52:41.063 "enable_placement_id": 0, 00:52:41.063 "enable_quickack": false, 00:52:41.063 "enable_recv_pipe": true, 
00:52:41.063 "enable_zerocopy_send_client": false, 00:52:41.063 "enable_zerocopy_send_server": true, 00:52:41.063 "impl_name": "posix", 00:52:41.063 "recv_buf_size": 2097152, 00:52:41.063 "send_buf_size": 2097152, 00:52:41.063 "tls_version": 0, 00:52:41.063 "zerocopy_threshold": 0 00:52:41.063 } 00:52:41.063 } 00:52:41.063 ] 00:52:41.063 }, 00:52:41.063 { 00:52:41.063 "subsystem": "vmd", 00:52:41.063 "config": [] 00:52:41.063 }, 00:52:41.063 { 00:52:41.063 "subsystem": "accel", 00:52:41.063 "config": [ 00:52:41.063 { 00:52:41.063 "method": "accel_set_options", 00:52:41.063 "params": { 00:52:41.063 "buf_count": 2048, 00:52:41.063 "large_cache_size": 16, 00:52:41.063 "sequence_count": 2048, 00:52:41.063 "small_cache_size": 128, 00:52:41.063 "task_count": 2048 00:52:41.063 } 00:52:41.063 } 00:52:41.063 ] 00:52:41.063 }, 00:52:41.063 { 00:52:41.063 "subsystem": "bdev", 00:52:41.063 "config": [ 00:52:41.063 { 00:52:41.063 "method": "bdev_set_options", 00:52:41.063 "params": { 00:52:41.063 "bdev_auto_examine": true, 00:52:41.063 "bdev_io_cache_size": 256, 00:52:41.063 "bdev_io_pool_size": 65535, 00:52:41.063 "iobuf_large_cache_size": 16, 00:52:41.063 "iobuf_small_cache_size": 128 00:52:41.063 } 00:52:41.063 }, 00:52:41.063 { 00:52:41.063 "method": "bdev_raid_set_options", 00:52:41.063 "params": { 00:52:41.063 "process_max_bandwidth_mb_sec": 0, 00:52:41.063 "process_window_size_kb": 1024 00:52:41.063 } 00:52:41.063 }, 00:52:41.063 { 00:52:41.063 "method": "bdev_iscsi_set_options", 00:52:41.063 "params": { 00:52:41.063 "timeout_sec": 30 00:52:41.063 } 00:52:41.063 }, 00:52:41.063 { 00:52:41.063 "method": "bdev_nvme_set_options", 00:52:41.063 "params": { 00:52:41.063 "action_on_timeout": "none", 00:52:41.063 "allow_accel_sequence": false, 00:52:41.063 "arbitration_burst": 0, 00:52:41.063 "bdev_retry_count": 3, 00:52:41.063 "ctrlr_loss_timeout_sec": 0, 00:52:41.063 "delay_cmd_submit": true, 00:52:41.063 "dhchap_dhgroups": [ 00:52:41.063 "null", 00:52:41.063 "ffdhe2048", 00:52:41.063 "ffdhe3072", 00:52:41.063 "ffdhe4096", 00:52:41.063 "ffdhe6144", 00:52:41.063 "ffdhe8192" 00:52:41.063 ], 00:52:41.063 "dhchap_digests": [ 00:52:41.063 "sha256", 00:52:41.063 "sha384", 00:52:41.064 "sha512" 00:52:41.064 ], 00:52:41.064 "disable_auto_failback": false, 00:52:41.064 "fast_io_fail_timeout_sec": 0, 00:52:41.064 "generate_uuids": false, 00:52:41.064 "high_priority_weight": 0, 00:52:41.064 "io_path_stat": false, 00:52:41.064 "io_queue_requests": 512, 00:52:41.064 "keep_alive_timeout_ms": 10000, 00:52:41.064 "low_priority_weight": 0, 00:52:41.064 "medium_priority_weight": 0, 00:52:41.064 "nvme_adminq_poll_period_us": 10000, 00:52:41.064 "nvme_error_stat": false, 00:52:41.064 "nvme_ioq_poll_period_us": 0, 00:52:41.064 "rdma_cm_event_timeout_ms": 0, 00:52:41.064 "rdma_max_cq_size": 0, 00:52:41.064 "rdma_srq_size": 0, 00:52:41.064 "reconnect_delay_sec": 0, 00:52:41.064 "timeout_admin_us": 0, 00:52:41.064 "timeout_us": 0, 00:52:41.064 "transport_ack_timeout": 0, 00:52:41.064 "transport_retry_count": 4, 00:52:41.064 "transport_tos": 0 00:52:41.064 } 00:52:41.064 }, 00:52:41.064 { 00:52:41.064 "method": "bdev_nvme_attach_controller", 00:52:41.064 "params": { 00:52:41.064 "adrfam": "IPv4", 00:52:41.064 "ctrlr_loss_timeout_sec": 0, 00:52:41.064 "ddgst": false, 00:52:41.064 "fast_io_fail_timeout_sec": 0, 00:52:41.064 "hdgst": false, 00:52:41.064 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:52:41.064 "multipath": "multipath", 00:52:41.064 "name": "TLSTEST", 00:52:41.064 "prchk_guard": false, 00:52:41.064 "prchk_reftag": 
false, 00:52:41.064 "psk": "key0", 00:52:41.064 "reconnect_delay_sec": 0, 00:52:41.064 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:52:41.064 "traddr": "10.0.0.3", 00:52:41.064 "trsvcid": "4420", 00:52:41.064 "trtype": "TCP" 00:52:41.064 } 00:52:41.064 }, 00:52:41.064 { 00:52:41.064 "method": "bdev_nvme_set_hotplug", 00:52:41.064 "params": { 00:52:41.064 "enable": false, 00:52:41.064 "period_us": 100000 00:52:41.064 } 00:52:41.064 }, 00:52:41.064 { 00:52:41.064 "method": "bdev_wait_for_examine" 00:52:41.064 } 00:52:41.064 ] 00:52:41.064 }, 00:52:41.064 { 00:52:41.064 "subsystem": "nbd", 00:52:41.064 "config": [] 00:52:41.064 } 00:52:41.064 ] 00:52:41.064 }' 00:52:41.064 10:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 84462 00:52:41.064 10:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84462 ']' 00:52:41.064 10:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84462 00:52:41.064 10:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:52:41.064 10:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:41.064 10:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84462 00:52:41.064 10:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:52:41.064 10:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:52:41.064 killing process with pid 84462 00:52:41.064 10:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84462' 00:52:41.064 10:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84462 00:52:41.064 Received shutdown signal, test time was about 10.000000 seconds 00:52:41.064 00:52:41.064 Latency(us) 00:52:41.064 [2024-12-09T10:03:48.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:52:41.064 [2024-12-09T10:03:48.108Z] =================================================================================================================== 00:52:41.064 [2024-12-09T10:03:48.108Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:52:41.064 10:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84462 00:52:41.064 10:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 84348 00:52:41.064 10:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84348 ']' 00:52:41.064 10:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84348 00:52:41.064 10:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:52:41.064 10:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:41.064 10:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84348 00:52:41.064 10:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:52:41.064 10:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:52:41.064 killing process with pid 84348 00:52:41.064 10:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84348' 00:52:41.064 
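At target/tls.sh@198-199 above, the test snapshots both processes with save_config before tearing them down; the two JSON dumps differ where expected (the bdevperf side carries the bdev_nvme_attach_controller call with "psk": "key0" and io_queue_requests 512, while the target side carries the nvmf_* subsystem calls). A hedged sketch of the same save-and-replay pattern using ordinary files (the file names here are illustrative):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc save_config > /tmp/tgt.json                                  # target side, default socket /var/tmp/spdk.sock
  $rpc -s /var/tmp/bdevperf.sock save_config > /tmp/bdevperf.json   # initiator side
  # Later, -c replays a saved JSON config at startup instead of issuing RPCs one by one:
  nvmf_tgt -m 0x2 -c /tmp/tgt.json
  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /tmp/bdevperf.json

That is what the restart below does, only through /dev/fd descriptors instead of files on disk.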
10:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84348 00:52:41.064 10:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84348 00:52:41.324 10:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:52:41.324 10:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:52:41.324 10:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:52:41.324 10:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:52:41.324 10:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:52:41.324 "subsystems": [ 00:52:41.324 { 00:52:41.324 "subsystem": "keyring", 00:52:41.324 "config": [ 00:52:41.324 { 00:52:41.324 "method": "keyring_file_add_key", 00:52:41.324 "params": { 00:52:41.324 "name": "key0", 00:52:41.324 "path": "/tmp/tmp.h8ud1MAmqT" 00:52:41.324 } 00:52:41.324 } 00:52:41.324 ] 00:52:41.324 }, 00:52:41.324 { 00:52:41.324 "subsystem": "iobuf", 00:52:41.324 "config": [ 00:52:41.324 { 00:52:41.324 "method": "iobuf_set_options", 00:52:41.324 "params": { 00:52:41.324 "enable_numa": false, 00:52:41.324 "large_bufsize": 135168, 00:52:41.324 "large_pool_count": 1024, 00:52:41.324 "small_bufsize": 8192, 00:52:41.324 "small_pool_count": 8192 00:52:41.324 } 00:52:41.324 } 00:52:41.324 ] 00:52:41.324 }, 00:52:41.324 { 00:52:41.324 "subsystem": "sock", 00:52:41.324 "config": [ 00:52:41.324 { 00:52:41.324 "method": "sock_set_default_impl", 00:52:41.324 "params": { 00:52:41.324 "impl_name": "posix" 00:52:41.324 } 00:52:41.324 }, 00:52:41.324 { 00:52:41.324 "method": "sock_impl_set_options", 00:52:41.324 "params": { 00:52:41.324 "enable_ktls": false, 00:52:41.324 "enable_placement_id": 0, 00:52:41.324 "enable_quickack": false, 00:52:41.324 "enable_recv_pipe": true, 00:52:41.324 "enable_zerocopy_send_client": false, 00:52:41.324 "enable_zerocopy_send_server": true, 00:52:41.324 "impl_name": "ssl", 00:52:41.324 "recv_buf_size": 4096, 00:52:41.324 "send_buf_size": 4096, 00:52:41.324 "tls_version": 0, 00:52:41.324 "zerocopy_threshold": 0 00:52:41.324 } 00:52:41.324 }, 00:52:41.324 { 00:52:41.324 "method": "sock_impl_set_options", 00:52:41.324 "params": { 00:52:41.324 "enable_ktls": false, 00:52:41.324 "enable_placement_id": 0, 00:52:41.324 "enable_quickack": false, 00:52:41.324 "enable_recv_pipe": true, 00:52:41.324 "enable_zerocopy_send_client": false, 00:52:41.324 "enable_zerocopy_send_server": true, 00:52:41.324 "impl_name": "posix", 00:52:41.324 "recv_buf_size": 2097152, 00:52:41.325 "send_buf_size": 2097152, 00:52:41.325 "tls_version": 0, 00:52:41.325 "zerocopy_threshold": 0 00:52:41.325 } 00:52:41.325 } 00:52:41.325 ] 00:52:41.325 }, 00:52:41.325 { 00:52:41.325 "subsystem": "vmd", 00:52:41.325 "config": [] 00:52:41.325 }, 00:52:41.325 { 00:52:41.325 "subsystem": "accel", 00:52:41.325 "config": [ 00:52:41.325 { 00:52:41.325 "method": "accel_set_options", 00:52:41.325 "params": { 00:52:41.325 "buf_count": 2048, 00:52:41.325 "large_cache_size": 16, 00:52:41.325 "sequence_count": 2048, 00:52:41.325 "small_cache_size": 128, 00:52:41.325 "task_count": 2048 00:52:41.325 } 00:52:41.325 } 00:52:41.325 ] 00:52:41.325 }, 00:52:41.325 { 00:52:41.325 "subsystem": "bdev", 00:52:41.325 "config": [ 00:52:41.325 { 00:52:41.325 "method": "bdev_set_options", 00:52:41.325 "params": { 00:52:41.325 "bdev_auto_examine": true, 00:52:41.325 "bdev_io_cache_size": 256, 
00:52:41.325 "bdev_io_pool_size": 65535, 00:52:41.325 "iobuf_large_cache_size": 16, 00:52:41.325 "iobuf_small_cache_size": 128 00:52:41.325 } 00:52:41.325 }, 00:52:41.325 { 00:52:41.325 "method": "bdev_raid_set_options", 00:52:41.325 "params": { 00:52:41.325 "process_max_bandwidth_mb_sec": 0, 00:52:41.325 "process_window_size_kb": 1024 00:52:41.325 } 00:52:41.325 }, 00:52:41.325 { 00:52:41.325 "method": "bdev_iscsi_set_options", 00:52:41.325 "params": { 00:52:41.325 "timeout_sec": 30 00:52:41.325 } 00:52:41.325 }, 00:52:41.325 { 00:52:41.325 "method": "bdev_nvme_set_options", 00:52:41.325 "params": { 00:52:41.325 "action_on_timeout": "none", 00:52:41.325 "allow_accel_sequence": false, 00:52:41.325 "arbitration_burst": 0, 00:52:41.325 "bdev_retry_count": 3, 00:52:41.325 "ctrlr_loss_timeout_sec": 0, 00:52:41.325 "delay_cmd_submit": true, 00:52:41.325 "dhchap_dhgroups": [ 00:52:41.325 "null", 00:52:41.325 "ffdhe2048", 00:52:41.325 "ffdhe3072", 00:52:41.325 "ffdhe4096", 00:52:41.325 "ffdhe6144", 00:52:41.325 "ffdhe8192" 00:52:41.325 ], 00:52:41.325 "dhchap_digests": [ 00:52:41.325 "sha256", 00:52:41.325 "sha384", 00:52:41.325 "sha512" 00:52:41.325 ], 00:52:41.325 "disable_auto_failback": false, 00:52:41.325 "fast_io_fail_timeout_sec": 0, 00:52:41.325 "generate_uuids": false, 00:52:41.325 "high_priority_weight": 0, 00:52:41.325 "io_path_stat": false, 00:52:41.325 "io_queue_requests": 0, 00:52:41.325 "keep_alive_timeout_ms": 10000, 00:52:41.325 "low_priority_weight": 0, 00:52:41.325 "medium_priority_weight": 0, 00:52:41.325 "nvme_adminq_poll_period_us": 10000, 00:52:41.325 "nvme_error_stat": false, 00:52:41.325 "nvme_ioq_poll_period_us": 0, 00:52:41.325 "rdma_cm_event_timeout_ms": 0, 00:52:41.325 "rdma_max_cq_size": 0, 00:52:41.325 "rdma_srq_size": 0, 00:52:41.325 "reconnect_delay_sec": 0, 00:52:41.325 "timeout_admin_us": 0, 00:52:41.325 "timeout_us": 0, 00:52:41.325 "transport_ack_timeout": 0, 00:52:41.325 "transport_retry_count": 4, 00:52:41.325 "transport_tos": 0 00:52:41.325 } 00:52:41.325 }, 00:52:41.325 { 00:52:41.325 "method": "bdev_nvme_set_hotplug", 00:52:41.325 "params": { 00:52:41.325 "enable": false, 00:52:41.325 "period_us": 100000 00:52:41.325 } 00:52:41.325 }, 00:52:41.325 { 00:52:41.325 "method": "bdev_malloc_create", 00:52:41.325 "params": { 00:52:41.325 "block_size": 4096, 00:52:41.325 "dif_is_head_of_md": false, 00:52:41.325 "dif_pi_format": 0, 00:52:41.325 "dif_type": 0, 00:52:41.325 "md_size": 0, 00:52:41.325 "name": "malloc0", 00:52:41.325 "num_blocks": 8192, 00:52:41.325 "optimal_io_boundary": 0, 00:52:41.325 "physical_block_size": 4096, 00:52:41.325 "uuid": "0efb65d3-cc20-4847-92ef-e60fdb8d1d05" 00:52:41.325 } 00:52:41.325 }, 00:52:41.325 { 00:52:41.325 "method": "bdev_wait_for_examine" 00:52:41.325 } 00:52:41.325 ] 00:52:41.325 }, 00:52:41.325 { 00:52:41.325 "subsystem": "nbd", 00:52:41.325 "config": [] 00:52:41.325 }, 00:52:41.325 { 00:52:41.325 "subsystem": "scheduler", 00:52:41.325 "config": [ 00:52:41.325 { 00:52:41.325 "method": "framework_set_scheduler", 00:52:41.325 "params": { 00:52:41.325 "name": "static" 00:52:41.325 } 00:52:41.325 } 00:52:41.325 ] 00:52:41.325 }, 00:52:41.325 { 00:52:41.325 "subsystem": "nvmf", 00:52:41.325 "config": [ 00:52:41.325 { 00:52:41.325 "method": "nvmf_set_config", 00:52:41.325 "params": { 00:52:41.325 "admin_cmd_passthru": { 00:52:41.325 "identify_ctrlr": false 00:52:41.325 }, 00:52:41.325 "dhchap_dhgroups": [ 00:52:41.325 "null", 00:52:41.325 "ffdhe2048", 00:52:41.325 "ffdhe3072", 00:52:41.325 "ffdhe4096", 00:52:41.325 "ffdhe6144", 
00:52:41.325 "ffdhe8192" 00:52:41.325 ], 00:52:41.325 "dhchap_digests": [ 00:52:41.325 "sha256", 00:52:41.325 "sha384", 00:52:41.325 "sha512" 00:52:41.325 ], 00:52:41.325 "discovery_filter": "match_any" 00:52:41.325 } 00:52:41.325 }, 00:52:41.325 { 00:52:41.325 "method": "nvmf_set_max_subsystems", 00:52:41.325 "params": { 00:52:41.325 "max_subsystems": 1024 00:52:41.325 } 00:52:41.325 }, 00:52:41.325 { 00:52:41.325 "method": "nvmf_set_crdt", 00:52:41.325 "params": { 00:52:41.325 "crdt1": 0, 00:52:41.325 "crdt2": 0, 00:52:41.325 "crdt3": 0 00:52:41.325 } 00:52:41.325 }, 00:52:41.325 { 00:52:41.325 "method": "nvmf_create_transport", 00:52:41.325 "params": { 00:52:41.325 "abort_timeout_sec": 1, 00:52:41.325 "ack_timeout": 0, 00:52:41.325 "buf_cache_size": 4294967295, 00:52:41.325 "c2h_success": false, 00:52:41.325 "data_wr_pool_size": 0, 00:52:41.325 "dif_insert_or_strip": false, 00:52:41.325 "in_capsule_data_size": 4096, 00:52:41.325 "io_unit_size": 131072, 00:52:41.325 "max_aq_depth": 128, 00:52:41.325 "max_io_qpairs_per_ctrlr": 127, 00:52:41.325 "max_io_size": 131072, 00:52:41.325 "max_queue_depth": 128, 00:52:41.325 "num_shared_buffers": 511, 00:52:41.325 "sock_priority": 0, 00:52:41.325 "trtype": "TCP", 00:52:41.325 "zcopy": false 00:52:41.325 } 00:52:41.325 }, 00:52:41.325 { 00:52:41.325 "method": "nvmf_create_subsystem", 00:52:41.325 "params": { 00:52:41.325 "allow_any_host": false, 00:52:41.325 "ana_reporting": false, 00:52:41.325 "max_cntlid": 65519, 00:52:41.325 "max_namespaces": 10, 00:52:41.325 "min_cntlid": 1, 00:52:41.325 "model_number": "SPDK bdev Controller", 00:52:41.325 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:52:41.325 "serial_number": "SPDK00000000000001" 00:52:41.325 } 00:52:41.325 }, 00:52:41.325 { 00:52:41.325 "method": "nvmf_subsystem_add_host", 00:52:41.325 "params": { 00:52:41.325 "host": "nqn.2016-06.io.spdk:host1", 00:52:41.325 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:52:41.325 "psk": "key0" 00:52:41.325 } 00:52:41.325 }, 00:52:41.325 { 00:52:41.325 "method": "nvmf_subsystem_add_ns", 00:52:41.325 "params": { 00:52:41.325 "namespace": { 00:52:41.325 "bdev_name": "malloc0", 00:52:41.325 "nguid": "0EFB65D3CC20484792EFE60FDB8D1D05", 00:52:41.325 "no_auto_visible": false, 00:52:41.325 "nsid": 1, 00:52:41.325 "uuid": "0efb65d3-cc20-4847-92ef-e60fdb8d1d05" 00:52:41.325 }, 00:52:41.325 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:52:41.325 } 00:52:41.325 }, 00:52:41.325 { 00:52:41.325 "method": "nvmf_subsystem_add_listener", 00:52:41.325 "params": { 00:52:41.325 "listen_address": { 00:52:41.325 "adrfam": "IPv4", 00:52:41.325 "traddr": "10.0.0.3", 00:52:41.325 "trsvcid": "4420", 00:52:41.325 "trtype": "TCP" 00:52:41.325 }, 00:52:41.325 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:52:41.325 "secure_channel": true 00:52:41.325 } 00:52:41.325 } 00:52:41.325 ] 00:52:41.325 } 00:52:41.325 ] 00:52:41.325 }' 00:52:41.325 10:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84543 00:52:41.326 10:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:52:41.326 10:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84543 00:52:41.326 10:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84543 ']' 00:52:41.326 10:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:41.326 10:03:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:41.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:52:41.326 10:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:41.326 10:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:41.326 10:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:52:41.326 [2024-12-09 10:03:48.294768] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:52:41.326 [2024-12-09 10:03:48.294846] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:52:41.585 [2024-12-09 10:03:48.424798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:41.585 [2024-12-09 10:03:48.478530] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:52:41.585 [2024-12-09 10:03:48.478588] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:52:41.585 [2024-12-09 10:03:48.478596] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:52:41.585 [2024-12-09 10:03:48.478602] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:52:41.585 [2024-12-09 10:03:48.478607] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:52:41.585 [2024-12-09 10:03:48.478925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:41.845 [2024-12-09 10:03:48.700178] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:52:41.845 [2024-12-09 10:03:48.732061] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:52:41.845 [2024-12-09 10:03:48.732304] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:52:42.416 10:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:42.416 10:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:52:42.416 10:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:52:42.416 10:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:52:42.416 10:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:52:42.416 10:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:52:42.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
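The '-c /dev/fd/62' handed to nvmf_tgt above, paired with the echoed JSON at target/tls.sh@205, is almost certainly bash process substitution: the saved config is fed straight into the app without touching disk, and the bdevperf launched just below receives its config the same way as /dev/fd/63. A minimal equivalent, assuming the configs are held in shell variables:

  nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")
  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf")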
00:52:42.416 10:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=84587 00:52:42.416 10:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 84587 /var/tmp/bdevperf.sock 00:52:42.416 10:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84587 ']' 00:52:42.416 10:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:52:42.416 10:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:42.416 10:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:52:42.416 10:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:52:42.416 10:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:42.416 10:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:52:42.416 10:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:52:42.416 "subsystems": [ 00:52:42.416 { 00:52:42.416 "subsystem": "keyring", 00:52:42.416 "config": [ 00:52:42.416 { 00:52:42.416 "method": "keyring_file_add_key", 00:52:42.416 "params": { 00:52:42.416 "name": "key0", 00:52:42.416 "path": "/tmp/tmp.h8ud1MAmqT" 00:52:42.416 } 00:52:42.416 } 00:52:42.416 ] 00:52:42.416 }, 00:52:42.416 { 00:52:42.416 "subsystem": "iobuf", 00:52:42.416 "config": [ 00:52:42.416 { 00:52:42.416 "method": "iobuf_set_options", 00:52:42.416 "params": { 00:52:42.416 "enable_numa": false, 00:52:42.416 "large_bufsize": 135168, 00:52:42.416 "large_pool_count": 1024, 00:52:42.416 "small_bufsize": 8192, 00:52:42.416 "small_pool_count": 8192 00:52:42.416 } 00:52:42.416 } 00:52:42.416 ] 00:52:42.416 }, 00:52:42.416 { 00:52:42.416 "subsystem": "sock", 00:52:42.416 "config": [ 00:52:42.416 { 00:52:42.416 "method": "sock_set_default_impl", 00:52:42.416 "params": { 00:52:42.416 "impl_name": "posix" 00:52:42.416 } 00:52:42.416 }, 00:52:42.416 { 00:52:42.416 "method": "sock_impl_set_options", 00:52:42.416 "params": { 00:52:42.416 "enable_ktls": false, 00:52:42.416 "enable_placement_id": 0, 00:52:42.416 "enable_quickack": false, 00:52:42.416 "enable_recv_pipe": true, 00:52:42.416 "enable_zerocopy_send_client": false, 00:52:42.416 "enable_zerocopy_send_server": true, 00:52:42.416 "impl_name": "ssl", 00:52:42.416 "recv_buf_size": 4096, 00:52:42.416 "send_buf_size": 4096, 00:52:42.416 "tls_version": 0, 00:52:42.416 "zerocopy_threshold": 0 00:52:42.416 } 00:52:42.416 }, 00:52:42.416 { 00:52:42.416 "method": "sock_impl_set_options", 00:52:42.416 "params": { 00:52:42.416 "enable_ktls": false, 00:52:42.416 "enable_placement_id": 0, 00:52:42.416 "enable_quickack": false, 00:52:42.416 "enable_recv_pipe": true, 00:52:42.416 "enable_zerocopy_send_client": false, 00:52:42.416 "enable_zerocopy_send_server": true, 00:52:42.416 "impl_name": "posix", 00:52:42.416 "recv_buf_size": 2097152, 00:52:42.416 "send_buf_size": 2097152, 00:52:42.416 "tls_version": 0, 00:52:42.416 "zerocopy_threshold": 0 00:52:42.416 } 00:52:42.416 } 00:52:42.416 ] 00:52:42.416 }, 00:52:42.416 { 00:52:42.416 "subsystem": "vmd", 00:52:42.416 "config": [] 00:52:42.416 }, 00:52:42.416 { 00:52:42.416 "subsystem": "accel", 00:52:42.416 
"config": [ 00:52:42.416 { 00:52:42.416 "method": "accel_set_options", 00:52:42.416 "params": { 00:52:42.416 "buf_count": 2048, 00:52:42.416 "large_cache_size": 16, 00:52:42.416 "sequence_count": 2048, 00:52:42.416 "small_cache_size": 128, 00:52:42.416 "task_count": 2048 00:52:42.416 } 00:52:42.416 } 00:52:42.416 ] 00:52:42.416 }, 00:52:42.416 { 00:52:42.416 "subsystem": "bdev", 00:52:42.416 "config": [ 00:52:42.416 { 00:52:42.416 "method": "bdev_set_options", 00:52:42.416 "params": { 00:52:42.416 "bdev_auto_examine": true, 00:52:42.416 "bdev_io_cache_size": 256, 00:52:42.416 "bdev_io_pool_size": 65535, 00:52:42.417 "iobuf_large_cache_size": 16, 00:52:42.417 "iobuf_small_cache_size": 128 00:52:42.417 } 00:52:42.417 }, 00:52:42.417 { 00:52:42.417 "method": "bdev_raid_set_options", 00:52:42.417 "params": { 00:52:42.417 "process_max_bandwidth_mb_sec": 0, 00:52:42.417 "process_window_size_kb": 1024 00:52:42.417 } 00:52:42.417 }, 00:52:42.417 { 00:52:42.417 "method": "bdev_iscsi_set_options", 00:52:42.417 "params": { 00:52:42.417 "timeout_sec": 30 00:52:42.417 } 00:52:42.417 }, 00:52:42.417 { 00:52:42.417 "method": "bdev_nvme_set_options", 00:52:42.417 "params": { 00:52:42.417 "action_on_timeout": "none", 00:52:42.417 "allow_accel_sequence": false, 00:52:42.417 "arbitration_burst": 0, 00:52:42.417 "bdev_retry_count": 3, 00:52:42.417 "ctrlr_loss_timeout_sec": 0, 00:52:42.417 "delay_cmd_submit": true, 00:52:42.417 "dhchap_dhgroups": [ 00:52:42.417 "null", 00:52:42.417 "ffdhe2048", 00:52:42.417 "ffdhe3072", 00:52:42.417 "ffdhe4096", 00:52:42.417 "ffdhe6144", 00:52:42.417 "ffdhe8192" 00:52:42.417 ], 00:52:42.417 "dhchap_digests": [ 00:52:42.417 "sha256", 00:52:42.417 "sha384", 00:52:42.417 "sha512" 00:52:42.417 ], 00:52:42.417 "disable_auto_failback": false, 00:52:42.417 "fast_io_fail_timeout_sec": 0, 00:52:42.417 "generate_uuids": false, 00:52:42.417 "high_priority_weight": 0, 00:52:42.417 "io_path_stat": false, 00:52:42.417 "io_queue_requests": 512, 00:52:42.417 "keep_alive_timeout_ms": 10000, 00:52:42.417 "low_priority_weight": 0, 00:52:42.417 "medium_priority_weight": 0, 00:52:42.417 "nvme_adminq_poll_period_us": 10000, 00:52:42.417 "nvme_error_stat": false, 00:52:42.417 "nvme_ioq_poll_period_us": 0, 00:52:42.417 "rdma_cm_event_timeout_ms": 0, 00:52:42.417 "rdma_max_cq_size": 0, 00:52:42.417 "rdma_srq_size": 0, 00:52:42.417 "reconnect_delay_sec": 0, 00:52:42.417 "timeout_admin_us": 0, 00:52:42.417 "timeout_us": 0, 00:52:42.417 "transport_ack_timeout": 0, 00:52:42.417 "transport_retry_count": 4, 00:52:42.417 "transport_tos": 0 00:52:42.417 } 00:52:42.417 }, 00:52:42.417 { 00:52:42.417 "method": "bdev_nvme_attach_controller", 00:52:42.417 "params": { 00:52:42.417 "adrfam": "IPv4", 00:52:42.417 "ctrlr_loss_timeout_sec": 0, 00:52:42.417 "ddgst": false, 00:52:42.417 "fast_io_fail_timeout_sec": 0, 00:52:42.417 "hdgst": false, 00:52:42.417 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:52:42.417 "multipath": "multipath", 00:52:42.417 "name": "TLSTEST", 00:52:42.417 "prchk_guard": false, 00:52:42.417 "prchk_reftag": false, 00:52:42.417 "psk": "key0", 00:52:42.417 "reconnect_delay_sec": 0, 00:52:42.417 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:52:42.417 "traddr": "10.0.0.3", 00:52:42.417 "trsvcid": "4420", 00:52:42.417 "trtype": "TCP" 00:52:42.417 } 00:52:42.417 }, 00:52:42.417 { 00:52:42.417 "method": "bdev_nvme_set_hotplug", 00:52:42.417 "params": { 00:52:42.417 "enable": false, 00:52:42.417 "period_us": 100000 00:52:42.417 } 00:52:42.417 }, 00:52:42.417 { 00:52:42.417 "method": "bdev_wait_for_examine" 
00:52:42.417 } 00:52:42.417 ] 00:52:42.417 }, 00:52:42.417 { 00:52:42.417 "subsystem": "nbd", 00:52:42.417 "config": [] 00:52:42.417 } 00:52:42.417 ] 00:52:42.417 }' 00:52:42.417 [2024-12-09 10:03:49.334131] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:52:42.417 [2024-12-09 10:03:49.334333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84587 ] 00:52:42.676 [2024-12-09 10:03:49.499204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:42.676 [2024-12-09 10:03:49.553770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:52:42.676 [2024-12-09 10:03:49.714331] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:52:43.245 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:43.245 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:52:43.245 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:52:43.504 Running I/O for 10 seconds... 00:52:45.386 5376.00 IOPS, 21.00 MiB/s [2024-12-09T10:03:53.368Z] 5196.50 IOPS, 20.30 MiB/s [2024-12-09T10:03:54.746Z] 5034.67 IOPS, 19.67 MiB/s [2024-12-09T10:03:55.687Z] 4942.50 IOPS, 19.31 MiB/s [2024-12-09T10:03:56.628Z] 4894.40 IOPS, 19.12 MiB/s [2024-12-09T10:03:57.584Z] 4871.50 IOPS, 19.03 MiB/s [2024-12-09T10:03:58.522Z] 4918.86 IOPS, 19.21 MiB/s [2024-12-09T10:03:59.459Z] 4896.00 IOPS, 19.12 MiB/s [2024-12-09T10:04:00.399Z] 4874.22 IOPS, 19.04 MiB/s [2024-12-09T10:04:00.399Z] 4882.30 IOPS, 19.07 MiB/s 00:52:53.355 Latency(us) 00:52:53.355 [2024-12-09T10:04:00.399Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:52:53.355 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:52:53.355 Verification LBA range: start 0x0 length 0x2000 00:52:53.355 TLSTESTn1 : 10.02 4885.48 19.08 0.00 0.00 26148.60 4807.88 18201.26 00:52:53.355 [2024-12-09T10:04:00.399Z] =================================================================================================================== 00:52:53.355 [2024-12-09T10:04:00.399Z] Total : 4885.48 19.08 0.00 0.00 26148.60 4807.88 18201.26 00:52:53.355 { 00:52:53.355 "results": [ 00:52:53.355 { 00:52:53.355 "job": "TLSTESTn1", 00:52:53.355 "core_mask": "0x4", 00:52:53.355 "workload": "verify", 00:52:53.355 "status": "finished", 00:52:53.355 "verify_range": { 00:52:53.355 "start": 0, 00:52:53.355 "length": 8192 00:52:53.355 }, 00:52:53.355 "queue_depth": 128, 00:52:53.355 "io_size": 4096, 00:52:53.355 "runtime": 10.019068, 00:52:53.355 "iops": 4885.484358425355, 00:52:53.355 "mibps": 19.083923275099043, 00:52:53.355 "io_failed": 0, 00:52:53.355 "io_timeout": 0, 00:52:53.355 "avg_latency_us": 26148.60436053161, 00:52:53.355 "min_latency_us": 4807.881222707423, 00:52:53.355 "max_latency_us": 18201.26462882096 00:52:53.355 } 00:52:53.355 ], 00:52:53.355 "core_count": 1 00:52:53.355 } 00:52:53.355 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:52:53.355 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 84587 00:52:53.355 10:04:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84587 ']' 00:52:53.355 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84587 00:52:53.615 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:52:53.615 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:53.615 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84587 00:52:53.615 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:52:53.615 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:52:53.615 killing process with pid 84587 00:52:53.615 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84587' 00:52:53.615 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84587 00:52:53.615 Received shutdown signal, test time was about 10.000000 seconds 00:52:53.615 00:52:53.615 Latency(us) 00:52:53.615 [2024-12-09T10:04:00.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:52:53.615 [2024-12-09T10:04:00.659Z] =================================================================================================================== 00:52:53.615 [2024-12-09T10:04:00.659Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:52:53.615 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84587 00:52:53.615 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 84543 00:52:53.615 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84543 ']' 00:52:53.615 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84543 00:52:53.615 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:52:53.615 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:53.615 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84543 00:52:53.615 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:52:53.615 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:52:53.615 killing process with pid 84543 00:52:53.615 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84543' 00:52:53.615 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84543 00:52:53.615 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84543 00:52:53.875 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:52:53.875 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:52:53.875 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:52:53.875 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:52:53.875 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84735 00:52:53.875 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- 
# ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:52:53.875 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84735 00:52:53.875 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84735 ']' 00:52:53.875 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:53.875 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:53.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:52:53.875 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:53.875 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:53.875 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:52:53.875 [2024-12-09 10:04:00.901685] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:52:53.875 [2024-12-09 10:04:00.901766] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:52:54.134 [2024-12-09 10:04:01.055389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:54.134 [2024-12-09 10:04:01.111695] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:52:54.134 [2024-12-09 10:04:01.111746] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:52:54.134 [2024-12-09 10:04:01.111753] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:52:54.134 [2024-12-09 10:04:01.111759] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:52:54.134 [2024-12-09 10:04:01.111764] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
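The target bring-up that follows repeats the same setup_nvmf_tgt helper already run twice above. Collected in one place, with the exact names and addresses used throughout this log, the sequence is:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: secure (TLS) listener
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc keyring_file_add_key key0 /tmp/tmp.h8ud1MAmqT
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0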
00:52:54.134 [2024-12-09 10:04:01.112050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:55.116 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:55.116 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:52:55.116 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:52:55.116 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:52:55.116 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:52:55.116 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:52:55.116 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.h8ud1MAmqT 00:52:55.116 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.h8ud1MAmqT 00:52:55.116 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:52:55.116 [2024-12-09 10:04:02.117862] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:52:55.116 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:52:55.376 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:52:55.635 [2024-12-09 10:04:02.565109] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:52:55.635 [2024-12-09 10:04:02.565322] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:52:55.635 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:52:55.895 malloc0 00:52:55.895 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:52:56.156 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.h8ud1MAmqT 00:52:56.416 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:52:56.676 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:52:56.676 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=84845 00:52:56.676 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:52:56.676 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 84845 /var/tmp/bdevperf.sock 00:52:56.676 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84845 ']' 00:52:56.676 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
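Once this bdevperf instance is listening, the lines that follow configure the initiator side: the same PSK file is loaded into the bdevperf process's own keyring, and bdev_nvme_attach_controller references it with --psk to negotiate TLS on connect. As a standalone sketch, using the commands exactly as they appear below:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.h8ud1MAmqT
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests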
00:52:56.676 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:56.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:52:56.676 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:52:56.676 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:56.676 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:52:56.676 [2024-12-09 10:04:03.523329] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:52:56.676 [2024-12-09 10:04:03.523408] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84845 ] 00:52:56.676 [2024-12-09 10:04:03.652181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:56.935 [2024-12-09 10:04:03.718900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:57.502 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:57.502 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:52:57.502 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.h8ud1MAmqT 00:52:57.761 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:52:58.020 [2024-12-09 10:04:04.871129] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:52:58.020 nvme0n1 00:52:58.020 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:52:58.278 Running I/O for 1 seconds... 
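The initiator mirrors this: bdevperf runs with -z (start suspended, wait for RPC configuration), gets the same PSK registered on its own socket, and attaches the controller with --psk naming the keyring entry, not the file. The three calls traced above, as a sketch:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf_rpc() { "$rpc" -s /var/tmp/bdevperf.sock "$@"; }

    bperf_rpc keyring_file_add_key key0 /tmp/tmp.h8ud1MAmqT
    bperf_rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

    # Run the preconfigured workload (-q 128 -o 4k -w verify -t 1 from the command line).
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests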
00:52:59.214 6385.00 IOPS, 24.94 MiB/s
00:52:59.214 Latency(us)
00:52:59.214 [2024-12-09T10:04:06.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:52:59.214 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:52:59.214 Verification LBA range: start 0x0 length 0x2000
00:52:59.214 nvme0n1 : 1.01 6446.04 25.18 0.00 0.00 19727.04 3892.09 18315.74
00:52:59.214 [2024-12-09T10:04:06.258Z] ===================================================================================================================
00:52:59.214 [2024-12-09T10:04:06.258Z] Total : 6446.04 25.18 0.00 0.00 19727.04 3892.09 18315.74
00:52:59.214 {
00:52:59.214 "results": [
00:52:59.214 {
00:52:59.214 "job": "nvme0n1",
00:52:59.214 "core_mask": "0x2",
00:52:59.214 "workload": "verify",
00:52:59.214 "status": "finished",
00:52:59.215 "verify_range": {
00:52:59.215 "start": 0,
00:52:59.215 "length": 8192
00:52:59.215 },
00:52:59.215 "queue_depth": 128,
00:52:59.215 "io_size": 4096,
00:52:59.215 "runtime": 1.010543,
00:52:59.215 "iops": 6446.039406536882,
00:52:59.215 "mibps": 25.179841431784695,
00:52:59.215 "io_failed": 0,
00:52:59.215 "io_timeout": 0,
00:52:59.215 "avg_latency_us": 19727.040870520064,
00:52:59.215 "min_latency_us": 3892.0943231441047,
00:52:59.215 "max_latency_us": 18315.737991266375
00:52:59.215 }
00:52:59.215 ],
00:52:59.215 "core_count": 1
00:52:59.215 }
00:52:59.215 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 84845
00:52:59.215 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84845 ']'
00:52:59.215 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84845
00:52:59.215 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:52:59.215 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:52:59.215 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84845
00:52:59.215 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:52:59.215 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:52:59.215 killing process with pid 84845
10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84845'
10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84845
00:52:59.215 Received shutdown signal, test time was about 1.000000 seconds
00:52:59.215 00
00:52:59.215 Latency(us)
00:52:59.215 [2024-12-09T10:04:06.259Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:52:59.215 [2024-12-09T10:04:06.259Z] ===================================================================================================================
00:52:59.215 [2024-12-09T10:04:06.259Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:52:59.215 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84845
00:52:59.480 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 84735
00:52:59.480 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84735 ']'
00:52:59.480 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84735
00:52:59.480 10:04:06
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:52:59.480 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:59.480 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84735 00:52:59.480 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:52:59.480 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:52:59.480 killing process with pid 84735 00:52:59.480 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84735' 00:52:59.480 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84735 00:52:59.480 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84735 00:52:59.746 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:52:59.746 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:52:59.746 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:52:59.746 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:52:59.746 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84921 00:52:59.746 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:52:59.746 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84921 00:52:59.746 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84921 ']' 00:52:59.746 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:59.746 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:59.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:52:59.746 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:59.746 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:59.746 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:52:59.746 [2024-12-09 10:04:06.595503] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:52:59.746 [2024-12-09 10:04:06.595589] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:52:59.746 [2024-12-09 10:04:06.744832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:00.005 [2024-12-09 10:04:06.794351] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:53:00.005 [2024-12-09 10:04:06.794404] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
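Sanity check on the run summary a few entries back: with a fixed 4096-byte I/O size the iops and mibps fields of the results JSON are redundant, and runtime times IOPS recovers the completed I/O count. Both reconcile:

    # mibps = iops * io_size / 2^20
    awk 'BEGIN { printf "%.2f MiB/s\n", 6446.039406536882 * 4096 / 1048576 }'   # -> 25.18
    # completed I/Os = runtime * iops
    awk 'BEGIN { printf "%.0f I/Os\n", 1.010543 * 6446.039406536882 }'          # -> 6514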
00:53:00.005 [2024-12-09 10:04:06.794411] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:53:00.005 [2024-12-09 10:04:06.794415] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:53:00.005 [2024-12-09 10:04:06.794420] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:53:00.005 [2024-12-09 10:04:06.794686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:53:00.575 10:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:00.575 10:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:53:00.575 10:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:53:00.575 10:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:53:00.575 10:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:53:00.575 10:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:53:00.575 10:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:53:00.575 10:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:00.575 10:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:53:00.575 [2024-12-09 10:04:07.547808] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:53:00.575 malloc0 00:53:00.575 [2024-12-09 10:04:07.576494] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:53:00.575 [2024-12-09 10:04:07.576693] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:53:00.575 10:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:00.575 10:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=84971 00:53:00.575 10:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 84971 /var/tmp/bdevperf.sock 00:53:00.575 10:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84971 ']' 00:53:00.575 10:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:53:00.575 10:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:00.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:53:00.575 10:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:53:00.575 10:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:00.575 10:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:53:00.575 10:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:53:00.835 [2024-12-09 10:04:07.663361] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
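The killprocess teardown traced before this restart (pid 84845, then 84735) follows a fixed pattern: verify the pid is non-empty and alive, check the process's comm name (reactor_0 and reactor_1 are SPDK's reactor threads) so a recycled pid is not signalled blindly, then kill and reap. A reduced sketch of that helper, reconstructed only from the commands visible in the trace:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 0        # nothing to do if it is already gone
        local process_name=
        [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1   # refuse to signal a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                        # reap and propagate the exit status
    }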
00:53:00.835 [2024-12-09 10:04:07.663444] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84971 ] 00:53:00.835 [2024-12-09 10:04:07.793492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:00.835 [2024-12-09 10:04:07.842398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:01.773 10:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:01.773 10:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:53:01.773 10:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.h8ud1MAmqT 00:53:01.773 10:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:53:02.032 [2024-12-09 10:04:08.979877] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:53:02.032 nvme0n1 00:53:02.032 10:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:53:02.291 Running I/O for 1 seconds... 00:53:03.230 5970.00 IOPS, 23.32 MiB/s 00:53:03.230 Latency(us) 00:53:03.230 [2024-12-09T10:04:10.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:53:03.230 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:53:03.230 Verification LBA range: start 0x0 length 0x2000 00:53:03.230 nvme0n1 : 1.01 6028.98 23.55 0.00 0.00 21084.26 3920.71 19803.89 00:53:03.230 [2024-12-09T10:04:10.274Z] =================================================================================================================== 00:53:03.230 [2024-12-09T10:04:10.274Z] Total : 6028.98 23.55 0.00 0.00 21084.26 3920.71 19803.89 00:53:03.230 { 00:53:03.230 "results": [ 00:53:03.230 { 00:53:03.230 "job": "nvme0n1", 00:53:03.230 "core_mask": "0x2", 00:53:03.230 "workload": "verify", 00:53:03.230 "status": "finished", 00:53:03.230 "verify_range": { 00:53:03.230 "start": 0, 00:53:03.230 "length": 8192 00:53:03.230 }, 00:53:03.230 "queue_depth": 128, 00:53:03.230 "io_size": 4096, 00:53:03.230 "runtime": 1.011448, 00:53:03.230 "iops": 6028.980234277986, 00:53:03.230 "mibps": 23.550704040148382, 00:53:03.230 "io_failed": 0, 00:53:03.230 "io_timeout": 0, 00:53:03.230 "avg_latency_us": 21084.264435472436, 00:53:03.230 "min_latency_us": 3920.7126637554584, 00:53:03.230 "max_latency_us": 19803.891703056768 00:53:03.230 } 00:53:03.230 ], 00:53:03.230 "core_count": 1 00:53:03.230 } 00:53:03.230 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:53:03.230 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:03.230 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:53:03.490 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:03.490 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # 
tgtcfg='{ 00:53:03.490 "subsystems": [ 00:53:03.490 { 00:53:03.490 "subsystem": "keyring", 00:53:03.490 "config": [ 00:53:03.490 { 00:53:03.490 "method": "keyring_file_add_key", 00:53:03.490 "params": { 00:53:03.490 "name": "key0", 00:53:03.490 "path": "/tmp/tmp.h8ud1MAmqT" 00:53:03.490 } 00:53:03.490 } 00:53:03.490 ] 00:53:03.490 }, 00:53:03.490 { 00:53:03.490 "subsystem": "iobuf", 00:53:03.490 "config": [ 00:53:03.490 { 00:53:03.490 "method": "iobuf_set_options", 00:53:03.490 "params": { 00:53:03.490 "enable_numa": false, 00:53:03.490 "large_bufsize": 135168, 00:53:03.490 "large_pool_count": 1024, 00:53:03.490 "small_bufsize": 8192, 00:53:03.490 "small_pool_count": 8192 00:53:03.490 } 00:53:03.490 } 00:53:03.490 ] 00:53:03.490 }, 00:53:03.490 { 00:53:03.490 "subsystem": "sock", 00:53:03.490 "config": [ 00:53:03.490 { 00:53:03.490 "method": "sock_set_default_impl", 00:53:03.490 "params": { 00:53:03.490 "impl_name": "posix" 00:53:03.490 } 00:53:03.490 }, 00:53:03.490 { 00:53:03.490 "method": "sock_impl_set_options", 00:53:03.490 "params": { 00:53:03.490 "enable_ktls": false, 00:53:03.490 "enable_placement_id": 0, 00:53:03.490 "enable_quickack": false, 00:53:03.490 "enable_recv_pipe": true, 00:53:03.490 "enable_zerocopy_send_client": false, 00:53:03.490 "enable_zerocopy_send_server": true, 00:53:03.490 "impl_name": "ssl", 00:53:03.490 "recv_buf_size": 4096, 00:53:03.490 "send_buf_size": 4096, 00:53:03.490 "tls_version": 0, 00:53:03.490 "zerocopy_threshold": 0 00:53:03.490 } 00:53:03.490 }, 00:53:03.490 { 00:53:03.490 "method": "sock_impl_set_options", 00:53:03.490 "params": { 00:53:03.490 "enable_ktls": false, 00:53:03.490 "enable_placement_id": 0, 00:53:03.490 "enable_quickack": false, 00:53:03.490 "enable_recv_pipe": true, 00:53:03.490 "enable_zerocopy_send_client": false, 00:53:03.490 "enable_zerocopy_send_server": true, 00:53:03.490 "impl_name": "posix", 00:53:03.490 "recv_buf_size": 2097152, 00:53:03.490 "send_buf_size": 2097152, 00:53:03.490 "tls_version": 0, 00:53:03.490 "zerocopy_threshold": 0 00:53:03.490 } 00:53:03.490 } 00:53:03.490 ] 00:53:03.490 }, 00:53:03.490 { 00:53:03.490 "subsystem": "vmd", 00:53:03.490 "config": [] 00:53:03.490 }, 00:53:03.490 { 00:53:03.490 "subsystem": "accel", 00:53:03.490 "config": [ 00:53:03.490 { 00:53:03.490 "method": "accel_set_options", 00:53:03.490 "params": { 00:53:03.490 "buf_count": 2048, 00:53:03.490 "large_cache_size": 16, 00:53:03.490 "sequence_count": 2048, 00:53:03.490 "small_cache_size": 128, 00:53:03.490 "task_count": 2048 00:53:03.490 } 00:53:03.490 } 00:53:03.490 ] 00:53:03.490 }, 00:53:03.490 { 00:53:03.490 "subsystem": "bdev", 00:53:03.490 "config": [ 00:53:03.490 { 00:53:03.490 "method": "bdev_set_options", 00:53:03.490 "params": { 00:53:03.490 "bdev_auto_examine": true, 00:53:03.490 "bdev_io_cache_size": 256, 00:53:03.490 "bdev_io_pool_size": 65535, 00:53:03.490 "iobuf_large_cache_size": 16, 00:53:03.490 "iobuf_small_cache_size": 128 00:53:03.490 } 00:53:03.490 }, 00:53:03.490 { 00:53:03.490 "method": "bdev_raid_set_options", 00:53:03.490 "params": { 00:53:03.490 "process_max_bandwidth_mb_sec": 0, 00:53:03.490 "process_window_size_kb": 1024 00:53:03.490 } 00:53:03.490 }, 00:53:03.490 { 00:53:03.490 "method": "bdev_iscsi_set_options", 00:53:03.490 "params": { 00:53:03.490 "timeout_sec": 30 00:53:03.490 } 00:53:03.490 }, 00:53:03.490 { 00:53:03.490 "method": "bdev_nvme_set_options", 00:53:03.490 "params": { 00:53:03.490 "action_on_timeout": "none", 00:53:03.490 "allow_accel_sequence": false, 00:53:03.490 "arbitration_burst": 0, 
00:53:03.490 "bdev_retry_count": 3, 00:53:03.490 "ctrlr_loss_timeout_sec": 0, 00:53:03.490 "delay_cmd_submit": true, 00:53:03.490 "dhchap_dhgroups": [ 00:53:03.490 "null", 00:53:03.490 "ffdhe2048", 00:53:03.490 "ffdhe3072", 00:53:03.490 "ffdhe4096", 00:53:03.490 "ffdhe6144", 00:53:03.490 "ffdhe8192" 00:53:03.490 ], 00:53:03.490 "dhchap_digests": [ 00:53:03.490 "sha256", 00:53:03.490 "sha384", 00:53:03.490 "sha512" 00:53:03.490 ], 00:53:03.490 "disable_auto_failback": false, 00:53:03.490 "fast_io_fail_timeout_sec": 0, 00:53:03.490 "generate_uuids": false, 00:53:03.490 "high_priority_weight": 0, 00:53:03.490 "io_path_stat": false, 00:53:03.490 "io_queue_requests": 0, 00:53:03.490 "keep_alive_timeout_ms": 10000, 00:53:03.490 "low_priority_weight": 0, 00:53:03.490 "medium_priority_weight": 0, 00:53:03.490 "nvme_adminq_poll_period_us": 10000, 00:53:03.490 "nvme_error_stat": false, 00:53:03.490 "nvme_ioq_poll_period_us": 0, 00:53:03.490 "rdma_cm_event_timeout_ms": 0, 00:53:03.490 "rdma_max_cq_size": 0, 00:53:03.490 "rdma_srq_size": 0, 00:53:03.490 "reconnect_delay_sec": 0, 00:53:03.490 "timeout_admin_us": 0, 00:53:03.490 "timeout_us": 0, 00:53:03.490 "transport_ack_timeout": 0, 00:53:03.490 "transport_retry_count": 4, 00:53:03.490 "transport_tos": 0 00:53:03.490 } 00:53:03.490 }, 00:53:03.490 { 00:53:03.490 "method": "bdev_nvme_set_hotplug", 00:53:03.490 "params": { 00:53:03.490 "enable": false, 00:53:03.490 "period_us": 100000 00:53:03.490 } 00:53:03.490 }, 00:53:03.490 { 00:53:03.490 "method": "bdev_malloc_create", 00:53:03.490 "params": { 00:53:03.490 "block_size": 4096, 00:53:03.490 "dif_is_head_of_md": false, 00:53:03.490 "dif_pi_format": 0, 00:53:03.490 "dif_type": 0, 00:53:03.490 "md_size": 0, 00:53:03.490 "name": "malloc0", 00:53:03.490 "num_blocks": 8192, 00:53:03.490 "optimal_io_boundary": 0, 00:53:03.490 "physical_block_size": 4096, 00:53:03.490 "uuid": "90688d0c-dca3-46c7-8985-72824d7a9dc4" 00:53:03.490 } 00:53:03.490 }, 00:53:03.490 { 00:53:03.490 "method": "bdev_wait_for_examine" 00:53:03.490 } 00:53:03.490 ] 00:53:03.490 }, 00:53:03.490 { 00:53:03.490 "subsystem": "nbd", 00:53:03.490 "config": [] 00:53:03.490 }, 00:53:03.490 { 00:53:03.490 "subsystem": "scheduler", 00:53:03.490 "config": [ 00:53:03.490 { 00:53:03.490 "method": "framework_set_scheduler", 00:53:03.490 "params": { 00:53:03.490 "name": "static" 00:53:03.490 } 00:53:03.490 } 00:53:03.490 ] 00:53:03.490 }, 00:53:03.490 { 00:53:03.490 "subsystem": "nvmf", 00:53:03.490 "config": [ 00:53:03.490 { 00:53:03.490 "method": "nvmf_set_config", 00:53:03.490 "params": { 00:53:03.490 "admin_cmd_passthru": { 00:53:03.490 "identify_ctrlr": false 00:53:03.490 }, 00:53:03.490 "dhchap_dhgroups": [ 00:53:03.490 "null", 00:53:03.490 "ffdhe2048", 00:53:03.490 "ffdhe3072", 00:53:03.490 "ffdhe4096", 00:53:03.490 "ffdhe6144", 00:53:03.490 "ffdhe8192" 00:53:03.490 ], 00:53:03.490 "dhchap_digests": [ 00:53:03.490 "sha256", 00:53:03.490 "sha384", 00:53:03.490 "sha512" 00:53:03.490 ], 00:53:03.490 "discovery_filter": "match_any" 00:53:03.490 } 00:53:03.490 }, 00:53:03.490 { 00:53:03.490 "method": "nvmf_set_max_subsystems", 00:53:03.490 "params": { 00:53:03.490 "max_subsystems": 1024 00:53:03.490 } 00:53:03.490 }, 00:53:03.490 { 00:53:03.490 "method": "nvmf_set_crdt", 00:53:03.490 "params": { 00:53:03.490 "crdt1": 0, 00:53:03.490 "crdt2": 0, 00:53:03.490 "crdt3": 0 00:53:03.490 } 00:53:03.490 }, 00:53:03.490 { 00:53:03.490 "method": "nvmf_create_transport", 00:53:03.490 "params": { 00:53:03.490 "abort_timeout_sec": 1, 00:53:03.490 "ack_timeout": 
0, 00:53:03.490 "buf_cache_size": 4294967295, 00:53:03.490 "c2h_success": false, 00:53:03.491 "data_wr_pool_size": 0, 00:53:03.491 "dif_insert_or_strip": false, 00:53:03.491 "in_capsule_data_size": 4096, 00:53:03.491 "io_unit_size": 131072, 00:53:03.491 "max_aq_depth": 128, 00:53:03.491 "max_io_qpairs_per_ctrlr": 127, 00:53:03.491 "max_io_size": 131072, 00:53:03.491 "max_queue_depth": 128, 00:53:03.491 "num_shared_buffers": 511, 00:53:03.491 "sock_priority": 0, 00:53:03.491 "trtype": "TCP", 00:53:03.491 "zcopy": false 00:53:03.491 } 00:53:03.491 }, 00:53:03.491 { 00:53:03.491 "method": "nvmf_create_subsystem", 00:53:03.491 "params": { 00:53:03.491 "allow_any_host": false, 00:53:03.491 "ana_reporting": false, 00:53:03.491 "max_cntlid": 65519, 00:53:03.491 "max_namespaces": 32, 00:53:03.491 "min_cntlid": 1, 00:53:03.491 "model_number": "SPDK bdev Controller", 00:53:03.491 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:53:03.491 "serial_number": "00000000000000000000" 00:53:03.491 } 00:53:03.491 }, 00:53:03.491 { 00:53:03.491 "method": "nvmf_subsystem_add_host", 00:53:03.491 "params": { 00:53:03.491 "host": "nqn.2016-06.io.spdk:host1", 00:53:03.491 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:53:03.491 "psk": "key0" 00:53:03.491 } 00:53:03.491 }, 00:53:03.491 { 00:53:03.491 "method": "nvmf_subsystem_add_ns", 00:53:03.491 "params": { 00:53:03.491 "namespace": { 00:53:03.491 "bdev_name": "malloc0", 00:53:03.491 "nguid": "90688D0CDCA346C7898572824D7A9DC4", 00:53:03.491 "no_auto_visible": false, 00:53:03.491 "nsid": 1, 00:53:03.491 "uuid": "90688d0c-dca3-46c7-8985-72824d7a9dc4" 00:53:03.491 }, 00:53:03.491 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:53:03.491 } 00:53:03.491 }, 00:53:03.491 { 00:53:03.491 "method": "nvmf_subsystem_add_listener", 00:53:03.491 "params": { 00:53:03.491 "listen_address": { 00:53:03.491 "adrfam": "IPv4", 00:53:03.491 "traddr": "10.0.0.3", 00:53:03.491 "trsvcid": "4420", 00:53:03.491 "trtype": "TCP" 00:53:03.491 }, 00:53:03.491 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:53:03.491 "secure_channel": false, 00:53:03.491 "sock_impl": "ssl" 00:53:03.491 } 00:53:03.491 } 00:53:03.491 ] 00:53:03.491 } 00:53:03.491 ] 00:53:03.491 }' 00:53:03.491 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:53:03.751 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:53:03.751 "subsystems": [ 00:53:03.751 { 00:53:03.751 "subsystem": "keyring", 00:53:03.751 "config": [ 00:53:03.751 { 00:53:03.751 "method": "keyring_file_add_key", 00:53:03.751 "params": { 00:53:03.751 "name": "key0", 00:53:03.751 "path": "/tmp/tmp.h8ud1MAmqT" 00:53:03.751 } 00:53:03.751 } 00:53:03.751 ] 00:53:03.751 }, 00:53:03.751 { 00:53:03.751 "subsystem": "iobuf", 00:53:03.751 "config": [ 00:53:03.751 { 00:53:03.751 "method": "iobuf_set_options", 00:53:03.751 "params": { 00:53:03.751 "enable_numa": false, 00:53:03.751 "large_bufsize": 135168, 00:53:03.751 "large_pool_count": 1024, 00:53:03.751 "small_bufsize": 8192, 00:53:03.751 "small_pool_count": 8192 00:53:03.751 } 00:53:03.751 } 00:53:03.751 ] 00:53:03.751 }, 00:53:03.751 { 00:53:03.751 "subsystem": "sock", 00:53:03.751 "config": [ 00:53:03.751 { 00:53:03.751 "method": "sock_set_default_impl", 00:53:03.751 "params": { 00:53:03.751 "impl_name": "posix" 00:53:03.751 } 00:53:03.751 }, 00:53:03.751 { 00:53:03.751 "method": "sock_impl_set_options", 00:53:03.751 "params": { 00:53:03.751 "enable_ktls": false, 00:53:03.751 "enable_placement_id": 0, 
00:53:03.751 "enable_quickack": false, 00:53:03.751 "enable_recv_pipe": true, 00:53:03.751 "enable_zerocopy_send_client": false, 00:53:03.751 "enable_zerocopy_send_server": true, 00:53:03.751 "impl_name": "ssl", 00:53:03.751 "recv_buf_size": 4096, 00:53:03.751 "send_buf_size": 4096, 00:53:03.751 "tls_version": 0, 00:53:03.751 "zerocopy_threshold": 0 00:53:03.751 } 00:53:03.751 }, 00:53:03.751 { 00:53:03.751 "method": "sock_impl_set_options", 00:53:03.751 "params": { 00:53:03.751 "enable_ktls": false, 00:53:03.751 "enable_placement_id": 0, 00:53:03.751 "enable_quickack": false, 00:53:03.751 "enable_recv_pipe": true, 00:53:03.751 "enable_zerocopy_send_client": false, 00:53:03.751 "enable_zerocopy_send_server": true, 00:53:03.751 "impl_name": "posix", 00:53:03.751 "recv_buf_size": 2097152, 00:53:03.751 "send_buf_size": 2097152, 00:53:03.751 "tls_version": 0, 00:53:03.751 "zerocopy_threshold": 0 00:53:03.751 } 00:53:03.751 } 00:53:03.751 ] 00:53:03.751 }, 00:53:03.751 { 00:53:03.751 "subsystem": "vmd", 00:53:03.751 "config": [] 00:53:03.751 }, 00:53:03.751 { 00:53:03.751 "subsystem": "accel", 00:53:03.751 "config": [ 00:53:03.751 { 00:53:03.751 "method": "accel_set_options", 00:53:03.751 "params": { 00:53:03.751 "buf_count": 2048, 00:53:03.751 "large_cache_size": 16, 00:53:03.751 "sequence_count": 2048, 00:53:03.751 "small_cache_size": 128, 00:53:03.751 "task_count": 2048 00:53:03.751 } 00:53:03.751 } 00:53:03.751 ] 00:53:03.751 }, 00:53:03.751 { 00:53:03.751 "subsystem": "bdev", 00:53:03.751 "config": [ 00:53:03.751 { 00:53:03.751 "method": "bdev_set_options", 00:53:03.751 "params": { 00:53:03.751 "bdev_auto_examine": true, 00:53:03.751 "bdev_io_cache_size": 256, 00:53:03.751 "bdev_io_pool_size": 65535, 00:53:03.751 "iobuf_large_cache_size": 16, 00:53:03.751 "iobuf_small_cache_size": 128 00:53:03.751 } 00:53:03.751 }, 00:53:03.751 { 00:53:03.751 "method": "bdev_raid_set_options", 00:53:03.751 "params": { 00:53:03.751 "process_max_bandwidth_mb_sec": 0, 00:53:03.751 "process_window_size_kb": 1024 00:53:03.751 } 00:53:03.751 }, 00:53:03.751 { 00:53:03.751 "method": "bdev_iscsi_set_options", 00:53:03.751 "params": { 00:53:03.751 "timeout_sec": 30 00:53:03.751 } 00:53:03.751 }, 00:53:03.751 { 00:53:03.751 "method": "bdev_nvme_set_options", 00:53:03.751 "params": { 00:53:03.751 "action_on_timeout": "none", 00:53:03.751 "allow_accel_sequence": false, 00:53:03.751 "arbitration_burst": 0, 00:53:03.751 "bdev_retry_count": 3, 00:53:03.751 "ctrlr_loss_timeout_sec": 0, 00:53:03.751 "delay_cmd_submit": true, 00:53:03.751 "dhchap_dhgroups": [ 00:53:03.751 "null", 00:53:03.751 "ffdhe2048", 00:53:03.751 "ffdhe3072", 00:53:03.751 "ffdhe4096", 00:53:03.751 "ffdhe6144", 00:53:03.751 "ffdhe8192" 00:53:03.751 ], 00:53:03.751 "dhchap_digests": [ 00:53:03.752 "sha256", 00:53:03.752 "sha384", 00:53:03.752 "sha512" 00:53:03.752 ], 00:53:03.752 "disable_auto_failback": false, 00:53:03.752 "fast_io_fail_timeout_sec": 0, 00:53:03.752 "generate_uuids": false, 00:53:03.752 "high_priority_weight": 0, 00:53:03.752 "io_path_stat": false, 00:53:03.752 "io_queue_requests": 512, 00:53:03.752 "keep_alive_timeout_ms": 10000, 00:53:03.752 "low_priority_weight": 0, 00:53:03.752 "medium_priority_weight": 0, 00:53:03.752 "nvme_adminq_poll_period_us": 10000, 00:53:03.752 "nvme_error_stat": false, 00:53:03.752 "nvme_ioq_poll_period_us": 0, 00:53:03.752 "rdma_cm_event_timeout_ms": 0, 00:53:03.752 "rdma_max_cq_size": 0, 00:53:03.752 "rdma_srq_size": 0, 00:53:03.752 "reconnect_delay_sec": 0, 00:53:03.752 "timeout_admin_us": 0, 00:53:03.752 
"timeout_us": 0, 00:53:03.752 "transport_ack_timeout": 0, 00:53:03.752 "transport_retry_count": 4, 00:53:03.752 "transport_tos": 0 00:53:03.752 } 00:53:03.752 }, 00:53:03.752 { 00:53:03.752 "method": "bdev_nvme_attach_controller", 00:53:03.752 "params": { 00:53:03.752 "adrfam": "IPv4", 00:53:03.752 "ctrlr_loss_timeout_sec": 0, 00:53:03.752 "ddgst": false, 00:53:03.752 "fast_io_fail_timeout_sec": 0, 00:53:03.752 "hdgst": false, 00:53:03.752 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:53:03.752 "multipath": "multipath", 00:53:03.752 "name": "nvme0", 00:53:03.752 "prchk_guard": false, 00:53:03.752 "prchk_reftag": false, 00:53:03.752 "psk": "key0", 00:53:03.752 "reconnect_delay_sec": 0, 00:53:03.752 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:53:03.752 "traddr": "10.0.0.3", 00:53:03.752 "trsvcid": "4420", 00:53:03.752 "trtype": "TCP" 00:53:03.752 } 00:53:03.752 }, 00:53:03.752 { 00:53:03.752 "method": "bdev_nvme_set_hotplug", 00:53:03.752 "params": { 00:53:03.752 "enable": false, 00:53:03.752 "period_us": 100000 00:53:03.752 } 00:53:03.752 }, 00:53:03.752 { 00:53:03.752 "method": "bdev_enable_histogram", 00:53:03.752 "params": { 00:53:03.752 "enable": true, 00:53:03.752 "name": "nvme0n1" 00:53:03.752 } 00:53:03.752 }, 00:53:03.752 { 00:53:03.752 "method": "bdev_wait_for_examine" 00:53:03.752 } 00:53:03.752 ] 00:53:03.752 }, 00:53:03.752 { 00:53:03.752 "subsystem": "nbd", 00:53:03.752 "config": [] 00:53:03.752 } 00:53:03.752 ] 00:53:03.752 }' 00:53:03.752 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 84971 00:53:03.752 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84971 ']' 00:53:03.752 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84971 00:53:03.752 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:53:03.752 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:03.752 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84971 00:53:03.752 killing process with pid 84971 00:53:03.752 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:53:03.752 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:53:03.752 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84971' 00:53:03.752 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84971 00:53:03.752 Received shutdown signal, test time was about 1.000000 seconds 00:53:03.752 00:53:03.752 Latency(us) 00:53:03.752 [2024-12-09T10:04:10.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:53:03.752 [2024-12-09T10:04:10.796Z] =================================================================================================================== 00:53:03.752 [2024-12-09T10:04:10.796Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:53:03.752 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84971 00:53:04.030 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 84921 00:53:04.030 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84921 ']' 00:53:04.030 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84921 
00:53:04.030 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:53:04.030 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:04.030 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84921 00:53:04.030 killing process with pid 84921 00:53:04.030 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:53:04.030 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:53:04.030 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84921' 00:53:04.030 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84921 00:53:04.030 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84921 00:53:04.291 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:53:04.291 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:53:04.291 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:53:04.291 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:53:04.291 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:53:04.291 "subsystems": [ 00:53:04.291 { 00:53:04.291 "subsystem": "keyring", 00:53:04.291 "config": [ 00:53:04.291 { 00:53:04.291 "method": "keyring_file_add_key", 00:53:04.291 "params": { 00:53:04.291 "name": "key0", 00:53:04.291 "path": "/tmp/tmp.h8ud1MAmqT" 00:53:04.291 } 00:53:04.291 } 00:53:04.291 ] 00:53:04.291 }, 00:53:04.291 { 00:53:04.291 "subsystem": "iobuf", 00:53:04.291 "config": [ 00:53:04.291 { 00:53:04.291 "method": "iobuf_set_options", 00:53:04.291 "params": { 00:53:04.291 "enable_numa": false, 00:53:04.291 "large_bufsize": 135168, 00:53:04.291 "large_pool_count": 1024, 00:53:04.291 "small_bufsize": 8192, 00:53:04.291 "small_pool_count": 8192 00:53:04.291 } 00:53:04.291 } 00:53:04.291 ] 00:53:04.291 }, 00:53:04.291 { 00:53:04.291 "subsystem": "sock", 00:53:04.291 "config": [ 00:53:04.291 { 00:53:04.291 "method": "sock_set_default_impl", 00:53:04.291 "params": { 00:53:04.291 "impl_name": "posix" 00:53:04.291 } 00:53:04.291 }, 00:53:04.291 { 00:53:04.291 "method": "sock_impl_set_options", 00:53:04.291 "params": { 00:53:04.291 "enable_ktls": false, 00:53:04.291 "enable_placement_id": 0, 00:53:04.291 "enable_quickack": false, 00:53:04.291 "enable_recv_pipe": true, 00:53:04.291 "enable_zerocopy_send_client": false, 00:53:04.291 "enable_zerocopy_send_server": true, 00:53:04.291 "impl_name": "ssl", 00:53:04.291 "recv_buf_size": 4096, 00:53:04.291 "send_buf_size": 4096, 00:53:04.291 "tls_version": 0, 00:53:04.291 "zerocopy_threshold": 0 00:53:04.291 } 00:53:04.291 }, 00:53:04.291 { 00:53:04.291 "method": "sock_impl_set_options", 00:53:04.291 "params": { 00:53:04.291 "enable_ktls": false, 00:53:04.291 "enable_placement_id": 0, 00:53:04.291 "enable_quickack": false, 00:53:04.291 "enable_recv_pipe": true, 00:53:04.291 "enable_zerocopy_send_client": false, 00:53:04.291 "enable_zerocopy_send_server": true, 00:53:04.291 "impl_name": "posix", 00:53:04.291 "recv_buf_size": 2097152, 00:53:04.291 "send_buf_size": 2097152, 00:53:04.291 "tls_version": 0, 00:53:04.291 "zerocopy_threshold": 0 00:53:04.291 } 
00:53:04.291 } 00:53:04.291 ] 00:53:04.291 }, 00:53:04.291 { 00:53:04.291 "subsystem": "vmd", 00:53:04.291 "config": [] 00:53:04.291 }, 00:53:04.291 { 00:53:04.291 "subsystem": "accel", 00:53:04.291 "config": [ 00:53:04.291 { 00:53:04.291 "method": "accel_set_options", 00:53:04.291 "params": { 00:53:04.291 "buf_count": 2048, 00:53:04.291 "large_cache_size": 16, 00:53:04.291 "sequence_count": 2048, 00:53:04.291 "small_cache_size": 128, 00:53:04.291 "task_count": 2048 00:53:04.291 } 00:53:04.291 } 00:53:04.291 ] 00:53:04.291 }, 00:53:04.291 { 00:53:04.291 "subsystem": "bdev", 00:53:04.291 "config": [ 00:53:04.291 { 00:53:04.291 "method": "bdev_set_options", 00:53:04.291 "params": { 00:53:04.291 "bdev_auto_examine": true, 00:53:04.291 "bdev_io_cache_size": 256, 00:53:04.291 "bdev_io_pool_size": 65535, 00:53:04.291 "iobuf_large_cache_size": 16, 00:53:04.291 "iobuf_small_cache_size": 128 00:53:04.291 } 00:53:04.291 }, 00:53:04.291 { 00:53:04.291 "method": "bdev_raid_set_options", 00:53:04.291 "params": { 00:53:04.291 "process_max_bandwidth_mb_sec": 0, 00:53:04.291 "process_window_size_kb": 1024 00:53:04.291 } 00:53:04.291 }, 00:53:04.291 { 00:53:04.291 "method": "bdev_iscsi_set_options", 00:53:04.291 "params": { 00:53:04.291 "timeout_sec": 30 00:53:04.291 } 00:53:04.291 }, 00:53:04.291 { 00:53:04.291 "method": "bdev_nvme_set_options", 00:53:04.291 "params": { 00:53:04.291 "action_on_timeout": "none", 00:53:04.291 "allow_accel_sequence": false, 00:53:04.291 "arbitration_burst": 0, 00:53:04.291 "bdev_retry_count": 3, 00:53:04.291 "ctrlr_loss_timeout_sec": 0, 00:53:04.291 "delay_cmd_submit": true, 00:53:04.291 "dhchap_dhgroups": [ 00:53:04.291 "null", 00:53:04.291 "ffdhe2048", 00:53:04.291 "ffdhe3072", 00:53:04.291 "ffdhe4096", 00:53:04.291 "ffdhe6144", 00:53:04.291 "ffdhe8192" 00:53:04.291 ], 00:53:04.292 "dhchap_digests": [ 00:53:04.292 "sha256", 00:53:04.292 "sha384", 00:53:04.292 "sha512" 00:53:04.292 ], 00:53:04.292 "disable_auto_failback": false, 00:53:04.292 "fast_io_fail_timeout_sec": 0, 00:53:04.292 "generate_uuids": false, 00:53:04.292 "high_priority_weight": 0, 00:53:04.292 "io_path_stat": false, 00:53:04.292 "io_queue_requests": 0, 00:53:04.292 "keep_alive_timeout_ms": 10000, 00:53:04.292 "low_priority_weight": 0, 00:53:04.292 "medium_priority_weight": 0, 00:53:04.292 "nvme_adminq_poll_period_us": 10000, 00:53:04.292 "nvme_error_stat": false, 00:53:04.292 "nvme_ioq_poll_period_us": 0, 00:53:04.292 "rdma_cm_event_timeout_ms": 0, 00:53:04.292 "rdma_max_cq_size": 0, 00:53:04.292 "rdma_srq_size": 0, 00:53:04.292 "reconnect_delay_sec": 0, 00:53:04.292 "timeout_admin_us": 0, 00:53:04.292 "timeout_us": 0, 00:53:04.292 "transport_ack_timeout": 0, 00:53:04.292 "transport_retry_count": 4, 00:53:04.292 "transport_tos": 0 00:53:04.292 } 00:53:04.292 }, 00:53:04.292 { 00:53:04.292 "method": "bdev_nvme_set_hotplug", 00:53:04.292 "params": { 00:53:04.292 "enable": false, 00:53:04.292 "period_us": 100000 00:53:04.292 } 00:53:04.292 }, 00:53:04.292 { 00:53:04.292 "method": "bdev_malloc_create", 00:53:04.292 "params": { 00:53:04.292 "block_size": 4096, 00:53:04.292 "dif_is_head_of_md": false, 00:53:04.292 "dif_pi_format": 0, 00:53:04.292 "dif_type": 0, 00:53:04.292 "md_size": 0, 00:53:04.292 "name": "malloc0", 00:53:04.292 "num_blocks": 8192, 00:53:04.292 "optimal_io_boundary": 0, 00:53:04.292 "physical_block_size": 4096, 00:53:04.292 "uuid": "90688d0c-dca3-46c7-8985-72824d7a9dc4" 00:53:04.292 } 00:53:04.292 }, 00:53:04.292 { 00:53:04.292 "method": "bdev_wait_for_examine" 00:53:04.292 } 00:53:04.292 ] 
00:53:04.292 }, 00:53:04.292 { 00:53:04.292 "subsystem": "nbd", 00:53:04.292 "config": [] 00:53:04.292 }, 00:53:04.292 { 00:53:04.292 "subsystem": "scheduler", 00:53:04.292 "config": [ 00:53:04.292 { 00:53:04.292 "method": "framework_set_scheduler", 00:53:04.292 "params": { 00:53:04.292 "name": "static" 00:53:04.292 } 00:53:04.292 } 00:53:04.292 ] 00:53:04.292 }, 00:53:04.292 { 00:53:04.292 "subsystem": "nvmf", 00:53:04.292 "config": [ 00:53:04.292 { 00:53:04.292 "method": "nvmf_set_config", 00:53:04.292 "params": { 00:53:04.292 "admin_cmd_passthru": { 00:53:04.292 "identify_ctrlr": false 00:53:04.292 }, 00:53:04.292 "dhchap_dhgroups": [ 00:53:04.292 "null", 00:53:04.292 "ffdhe2048", 00:53:04.292 "ffdhe3072", 00:53:04.292 "ffdhe4096", 00:53:04.292 "ffdhe6144", 00:53:04.292 "ffdhe8192" 00:53:04.292 ], 00:53:04.292 "dhchap_digests": [ 00:53:04.292 "sha256", 00:53:04.292 "sha384", 00:53:04.292 "sha512" 00:53:04.292 ], 00:53:04.292 "discovery_filter": "match_any" 00:53:04.292 } 00:53:04.292 }, 00:53:04.292 { 00:53:04.292 "method": "nvmf_set_max_subsystems", 00:53:04.292 "params": { 00:53:04.292 "max_subsystems": 1024 00:53:04.292 } 00:53:04.292 }, 00:53:04.292 { 00:53:04.292 "method": "nvmf_set_crdt", 00:53:04.292 "params": { 00:53:04.292 "crdt1": 0, 00:53:04.292 "crdt2": 0, 00:53:04.292 "crdt3": 0 00:53:04.292 } 00:53:04.292 }, 00:53:04.292 { 00:53:04.292 "method": "nvmf_create_transport", 00:53:04.292 "params": { 00:53:04.292 "abort_timeout_sec": 1, 00:53:04.292 "ack_timeout": 0, 00:53:04.292 "buf_cache_size": 4294967295, 00:53:04.292 "c2h_success": false, 00:53:04.292 "data_wr_pool_size": 0, 00:53:04.292 "dif_insert_or_strip": false, 00:53:04.292 "in_capsule_data_size": 4096, 00:53:04.292 "io_unit_size": 131072, 00:53:04.292 "max_aq_depth": 128, 00:53:04.292 "max_io_qpairs_per_ctrlr": 127, 00:53:04.292 "max_io_size": 131072, 00:53:04.292 "max_queue_depth": 128, 00:53:04.292 "num_shared_buffers": 511, 00:53:04.292 "sock_priority": 0, 00:53:04.292 "trtype": "TCP", 00:53:04.292 "zcopy": false 00:53:04.292 } 00:53:04.292 }, 00:53:04.292 { 00:53:04.292 "method": "nvmf_create_subsystem", 00:53:04.292 "params": { 00:53:04.292 "allow_any_host": false, 00:53:04.292 "ana_reporting": false, 00:53:04.292 "max_cntlid": 65519, 00:53:04.292 "max_namespaces": 32, 00:53:04.292 "min_cntlid": 1, 00:53:04.292 "model_number": "SPDK bdev Controller", 00:53:04.292 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:53:04.292 "serial_number": "00000000000000000000" 00:53:04.292 } 00:53:04.292 }, 00:53:04.292 { 00:53:04.292 "method": "nvmf_subsystem_add_host", 00:53:04.292 "params": { 00:53:04.292 "host": "nqn.2016-06.io.spdk:host1", 00:53:04.292 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:53:04.292 "psk": "key0" 00:53:04.292 } 00:53:04.292 }, 00:53:04.292 { 00:53:04.292 "method": "nvmf_subsystem_add_ns", 00:53:04.292 "params": { 00:53:04.292 "namespace": { 00:53:04.292 "bdev_name": "malloc0", 00:53:04.292 "nguid": "90688D0CDCA346C7898572824D7A9DC4", 00:53:04.292 "no_auto_visible": false, 00:53:04.292 "nsid": 1, 00:53:04.292 "uuid": "90688d0c-dca3-46c7-8985-72824d7a9dc4" 00:53:04.292 }, 00:53:04.292 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:53:04.292 } 00:53:04.292 }, 00:53:04.292 { 00:53:04.292 "method": "nvmf_subsystem_add_listener", 00:53:04.292 "params": { 00:53:04.292 "listen_address": { 00:53:04.292 "adrfam": "IPv4", 00:53:04.292 "traddr": "10.0.0.3", 00:53:04.292 "trsvcid": "4420", 00:53:04.292 "trtype": "TCP" 00:53:04.292 }, 00:53:04.292 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:53:04.292 "secure_channel": false, 00:53:04.292 
"sock_impl": "ssl" 00:53:04.292 } 00:53:04.292 } 00:53:04.292 ] 00:53:04.292 } 00:53:04.292 ] 00:53:04.292 }' 00:53:04.292 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=85057 00:53:04.292 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 85057 00:53:04.292 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85057 ']' 00:53:04.292 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:53:04.292 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:04.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:53:04.292 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:53:04.292 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:04.292 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:53:04.292 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:53:04.292 [2024-12-09 10:04:11.154600] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:53:04.292 [2024-12-09 10:04:11.154681] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:53:04.292 [2024-12-09 10:04:11.286849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:04.552 [2024-12-09 10:04:11.336290] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:53:04.552 [2024-12-09 10:04:11.336336] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:53:04.552 [2024-12-09 10:04:11.336344] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:53:04.552 [2024-12-09 10:04:11.336349] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:53:04.552 [2024-12-09 10:04:11.336355] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:53:04.552 [2024-12-09 10:04:11.336683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:53:04.552 [2024-12-09 10:04:11.553637] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:53:04.552 [2024-12-09 10:04:11.585564] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:53:04.552 [2024-12-09 10:04:11.585799] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:53:05.121 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:05.121 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:53:05.121 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:53:05.121 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:53:05.121 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:53:05.121 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:53:05.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:53:05.121 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=85101 00:53:05.121 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 85101 /var/tmp/bdevperf.sock 00:53:05.121 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85101 ']' 00:53:05.121 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:53:05.121 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:05.121 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
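This phase replays the captured configuration instead of issuing live RPCs: the JSON is echoed into each app through /dev/fd/62 and /dev/fd/63, which is what bash process substitution produces. A sketch of the pattern, assuming tgtcfg and bperfcfg hold the dumps from above:

    # Target restart; -c /dev/fd/62 in the trace corresponds to a construct like this.
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &

    # bdevperf gets its config the same way, on the next free descriptor.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z \
        -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &

Everything provisioned by hand in the first pass (transport, subsystem, TLS listener, keyring entry, attach) now comes up straight from the config file.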
00:53:05.121 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:05.121 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:53:05.121 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:53:05.121 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:53:05.121 "subsystems": [ 00:53:05.121 { 00:53:05.121 "subsystem": "keyring", 00:53:05.121 "config": [ 00:53:05.121 { 00:53:05.121 "method": "keyring_file_add_key", 00:53:05.121 "params": { 00:53:05.121 "name": "key0", 00:53:05.121 "path": "/tmp/tmp.h8ud1MAmqT" 00:53:05.121 } 00:53:05.121 } 00:53:05.121 ] 00:53:05.121 }, 00:53:05.121 { 00:53:05.121 "subsystem": "iobuf", 00:53:05.121 "config": [ 00:53:05.121 { 00:53:05.121 "method": "iobuf_set_options", 00:53:05.121 "params": { 00:53:05.121 "enable_numa": false, 00:53:05.121 "large_bufsize": 135168, 00:53:05.121 "large_pool_count": 1024, 00:53:05.121 "small_bufsize": 8192, 00:53:05.121 "small_pool_count": 8192 00:53:05.121 } 00:53:05.121 } 00:53:05.121 ] 00:53:05.121 }, 00:53:05.121 { 00:53:05.121 "subsystem": "sock", 00:53:05.121 "config": [ 00:53:05.121 { 00:53:05.121 "method": "sock_set_default_impl", 00:53:05.121 "params": { 00:53:05.121 "impl_name": "posix" 00:53:05.121 } 00:53:05.121 }, 00:53:05.121 { 00:53:05.121 "method": "sock_impl_set_options", 00:53:05.121 "params": { 00:53:05.121 "enable_ktls": false, 00:53:05.121 "enable_placement_id": 0, 00:53:05.121 "enable_quickack": false, 00:53:05.121 "enable_recv_pipe": true, 00:53:05.121 "enable_zerocopy_send_client": false, 00:53:05.121 "enable_zerocopy_send_server": true, 00:53:05.121 "impl_name": "ssl", 00:53:05.121 "recv_buf_size": 4096, 00:53:05.121 "send_buf_size": 4096, 00:53:05.121 "tls_version": 0, 00:53:05.121 "zerocopy_threshold": 0 00:53:05.121 } 00:53:05.121 }, 00:53:05.121 { 00:53:05.121 "method": "sock_impl_set_options", 00:53:05.121 "params": { 00:53:05.121 "enable_ktls": false, 00:53:05.121 "enable_placement_id": 0, 00:53:05.121 "enable_quickack": false, 00:53:05.121 "enable_recv_pipe": true, 00:53:05.121 "enable_zerocopy_send_client": false, 00:53:05.121 "enable_zerocopy_send_server": true, 00:53:05.121 "impl_name": "posix", 00:53:05.121 "recv_buf_size": 2097152, 00:53:05.121 "send_buf_size": 2097152, 00:53:05.121 "tls_version": 0, 00:53:05.121 "zerocopy_threshold": 0 00:53:05.121 } 00:53:05.121 } 00:53:05.121 ] 00:53:05.121 }, 00:53:05.121 { 00:53:05.121 "subsystem": "vmd", 00:53:05.121 "config": [] 00:53:05.121 }, 00:53:05.121 { 00:53:05.121 "subsystem": "accel", 00:53:05.121 "config": [ 00:53:05.121 { 00:53:05.121 "method": "accel_set_options", 00:53:05.121 "params": { 00:53:05.121 "buf_count": 2048, 00:53:05.121 "large_cache_size": 16, 00:53:05.121 "sequence_count": 2048, 00:53:05.121 "small_cache_size": 128, 00:53:05.121 "task_count": 2048 00:53:05.121 } 00:53:05.121 } 00:53:05.121 ] 00:53:05.121 }, 00:53:05.121 { 00:53:05.121 "subsystem": "bdev", 00:53:05.121 "config": [ 00:53:05.121 { 00:53:05.121 "method": "bdev_set_options", 00:53:05.121 "params": { 00:53:05.121 "bdev_auto_examine": true, 00:53:05.121 "bdev_io_cache_size": 256, 00:53:05.121 "bdev_io_pool_size": 65535, 00:53:05.122 "iobuf_large_cache_size": 16, 00:53:05.122 "iobuf_small_cache_size": 128 00:53:05.122 } 00:53:05.122 }, 00:53:05.122 { 00:53:05.122 "method": "bdev_raid_set_options", 
00:53:05.122 "params": { 00:53:05.122 "process_max_bandwidth_mb_sec": 0, 00:53:05.122 "process_window_size_kb": 1024 00:53:05.122 } 00:53:05.122 }, 00:53:05.122 { 00:53:05.122 "method": "bdev_iscsi_set_options", 00:53:05.122 "params": { 00:53:05.122 "timeout_sec": 30 00:53:05.122 } 00:53:05.122 }, 00:53:05.122 { 00:53:05.122 "method": "bdev_nvme_set_options", 00:53:05.122 "params": { 00:53:05.122 "action_on_timeout": "none", 00:53:05.122 "allow_accel_sequence": false, 00:53:05.122 "arbitration_burst": 0, 00:53:05.122 "bdev_retry_count": 3, 00:53:05.122 "ctrlr_loss_timeout_sec": 0, 00:53:05.122 "delay_cmd_submit": true, 00:53:05.122 "dhchap_dhgroups": [ 00:53:05.122 "null", 00:53:05.122 "ffdhe2048", 00:53:05.122 "ffdhe3072", 00:53:05.122 "ffdhe4096", 00:53:05.122 "ffdhe6144", 00:53:05.122 "ffdhe8192" 00:53:05.122 ], 00:53:05.122 "dhchap_digests": [ 00:53:05.122 "sha256", 00:53:05.122 "sha384", 00:53:05.122 "sha512" 00:53:05.122 ], 00:53:05.122 "disable_auto_failback": false, 00:53:05.122 "fast_io_fail_timeout_sec": 0, 00:53:05.122 "generate_uuids": false, 00:53:05.122 "high_priority_weight": 0, 00:53:05.122 "io_path_stat": false, 00:53:05.122 "io_queue_requests": 512, 00:53:05.122 "keep_alive_timeout_ms": 10000, 00:53:05.122 "low_priority_weight": 0, 00:53:05.122 "medium_priority_weight": 0, 00:53:05.122 "nvme_adminq_poll_period_us": 10000, 00:53:05.122 "nvme_error_stat": false, 00:53:05.122 "nvme_ioq_poll_period_us": 0, 00:53:05.122 "rdma_cm_event_timeout_ms": 0, 00:53:05.122 "rdma_max_cq_size": 0, 00:53:05.122 "rdma_srq_size": 0, 00:53:05.122 "reconnect_delay_sec": 0, 00:53:05.122 "timeout_admin_us": 0, 00:53:05.122 "timeout_us": 0, 00:53:05.122 "transport_ack_timeout": 0, 00:53:05.122 "transport_retry_count": 4, 00:53:05.122 "transport_tos": 0 00:53:05.122 } 00:53:05.122 }, 00:53:05.122 { 00:53:05.122 "method": "bdev_nvme_attach_controller", 00:53:05.122 "params": { 00:53:05.122 "adrfam": "IPv4", 00:53:05.122 "ctrlr_loss_timeout_sec": 0, 00:53:05.122 "ddgst": false, 00:53:05.122 "fast_io_fail_timeout_sec": 0, 00:53:05.122 "hdgst": false, 00:53:05.122 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:53:05.122 "multipath": "multipath", 00:53:05.122 "name": "nvme0", 00:53:05.122 "prchk_guard": false, 00:53:05.122 "prchk_reftag": false, 00:53:05.122 "psk": "key0", 00:53:05.122 "reconnect_delay_sec": 0, 00:53:05.122 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:53:05.122 "traddr": "10.0.0.3", 00:53:05.122 "trsvcid": "4420", 00:53:05.122 "trtype": "TCP" 00:53:05.122 } 00:53:05.122 }, 00:53:05.122 { 00:53:05.122 "method": "bdev_nvme_set_hotplug", 00:53:05.122 "params": { 00:53:05.122 "enable": false, 00:53:05.122 "period_us": 100000 00:53:05.122 } 00:53:05.122 }, 00:53:05.122 { 00:53:05.122 "method": "bdev_enable_histogram", 00:53:05.122 "params": { 00:53:05.122 "enable": true, 00:53:05.122 "name": "nvme0n1" 00:53:05.122 } 00:53:05.122 }, 00:53:05.122 { 00:53:05.122 "method": "bdev_wait_for_examine" 00:53:05.122 } 00:53:05.122 ] 00:53:05.122 }, 00:53:05.122 { 00:53:05.122 "subsystem": "nbd", 00:53:05.122 "config": [] 00:53:05.122 } 00:53:05.122 ] 00:53:05.122 }' 00:53:05.380 [2024-12-09 10:04:12.212297] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:53:05.380 [2024-12-09 10:04:12.212534] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85101 ] 00:53:05.380 [2024-12-09 10:04:12.362704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:05.380 [2024-12-09 10:04:12.419515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:05.638 [2024-12-09 10:04:12.578428] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:53:06.205 10:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:06.205 10:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:53:06.205 10:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:53:06.205 10:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:53:06.463 10:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:53:06.463 10:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:53:06.463 Running I/O for 1 seconds... 00:53:07.839 5413.00 IOPS, 21.14 MiB/s 00:53:07.839 Latency(us) 00:53:07.839 [2024-12-09T10:04:14.883Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:53:07.839 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:53:07.839 Verification LBA range: start 0x0 length 0x2000 00:53:07.839 nvme0n1 : 1.01 5467.11 21.36 0.00 0.00 23227.98 5380.25 20948.63 00:53:07.839 [2024-12-09T10:04:14.883Z] =================================================================================================================== 00:53:07.839 [2024-12-09T10:04:14.883Z] Total : 5467.11 21.36 0.00 0.00 23227.98 5380.25 20948.63 00:53:07.839 { 00:53:07.839 "results": [ 00:53:07.839 { 00:53:07.839 "job": "nvme0n1", 00:53:07.839 "core_mask": "0x2", 00:53:07.839 "workload": "verify", 00:53:07.839 "status": "finished", 00:53:07.839 "verify_range": { 00:53:07.839 "start": 0, 00:53:07.839 "length": 8192 00:53:07.839 }, 00:53:07.839 "queue_depth": 128, 00:53:07.839 "io_size": 4096, 00:53:07.839 "runtime": 1.013516, 00:53:07.839 "iops": 5467.106587365172, 00:53:07.839 "mibps": 21.355885106895204, 00:53:07.839 "io_failed": 0, 00:53:07.839 "io_timeout": 0, 00:53:07.839 "avg_latency_us": 23227.97725064998, 00:53:07.839 "min_latency_us": 5380.248034934498, 00:53:07.839 "max_latency_us": 20948.625327510916 00:53:07.839 } 00:53:07.839 ], 00:53:07.839 "core_count": 1 00:53:07.839 } 00:53:07.839 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:53:07.839 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:53:07.839 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:53:07.839 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:53:07.839 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:53:07.839 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:53:07.839 
10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:53:07.839 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:53:07.839 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:53:07.839 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:53:07.839 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:53:07.839 nvmf_trace.0 00:53:07.839 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:53:07.839 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 85101 00:53:07.839 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85101 ']' 00:53:07.839 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85101 00:53:07.839 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:53:07.839 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:07.839 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85101 00:53:07.839 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:53:07.839 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:53:07.839 killing process with pid 85101 00:53:07.839 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85101' 00:53:07.839 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85101 00:53:07.839 Received shutdown signal, test time was about 1.000000 seconds 00:53:07.839 00:53:07.839 Latency(us) 00:53:07.839 [2024-12-09T10:04:14.883Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:53:07.839 [2024-12-09T10:04:14.883Z] =================================================================================================================== 00:53:07.839 [2024-12-09T10:04:14.883Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:53:07.839 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85101 00:53:07.839 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:53:07.839 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:53:07.839 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:53:08.099 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:53:08.099 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:53:08.099 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:53:08.099 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:53:08.099 rmmod nvme_tcp 00:53:08.099 rmmod nvme_fabrics 00:53:08.099 rmmod nvme_keyring 00:53:08.099 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:53:08.099 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set 
-e 00:53:08.099 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:53:08.099 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 85057 ']' 00:53:08.099 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 85057 00:53:08.099 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85057 ']' 00:53:08.099 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85057 00:53:08.099 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:53:08.099 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:08.099 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85057 00:53:08.099 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:53:08.099 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:53:08.099 killing process with pid 85057 00:53:08.099 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85057' 00:53:08.099 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85057 00:53:08.099 10:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85057 00:53:08.358 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:53:08.358 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:53:08.358 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:53:08.358 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:53:08.358 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:53:08.358 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:53:08.358 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:53:08.358 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:53:08.358 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:53:08.358 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:53:08.358 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:53:08.358 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:53:08.358 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:53:08.358 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:53:08.358 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:53:08.358 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:53:08.358 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:53:08.358 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:53:08.618 10:04:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:53:08.618 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:53:08.618 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:53:08.618 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:53:08.618 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:53:08.618 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:08.618 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:53:08.618 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:08.618 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:53:08.618 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.XVieGvszS2 /tmp/tmp.n53ckjTSbO /tmp/tmp.h8ud1MAmqT 00:53:08.618 00:53:08.618 real 1m27.273s 00:53:08.618 user 2m18.995s 00:53:08.618 sys 0m28.745s 00:53:08.618 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:08.618 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:53:08.618 ************************************ 00:53:08.618 END TEST nvmf_tls 00:53:08.618 ************************************ 00:53:08.618 10:04:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:53:08.618 10:04:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:53:08.618 10:04:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:08.618 10:04:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:53:08.618 ************************************ 00:53:08.618 START TEST nvmf_fips 00:53:08.618 ************************************ 00:53:08.618 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:53:08.898 * Looking for test storage... 
00:53:08.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:53:08.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:08.898 --rc genhtml_branch_coverage=1 00:53:08.898 --rc genhtml_function_coverage=1 00:53:08.898 --rc genhtml_legend=1 00:53:08.898 --rc geninfo_all_blocks=1 00:53:08.898 --rc geninfo_unexecuted_blocks=1 00:53:08.898 00:53:08.898 ' 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:53:08.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:08.898 --rc genhtml_branch_coverage=1 00:53:08.898 --rc genhtml_function_coverage=1 00:53:08.898 --rc genhtml_legend=1 00:53:08.898 --rc geninfo_all_blocks=1 00:53:08.898 --rc geninfo_unexecuted_blocks=1 00:53:08.898 00:53:08.898 ' 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:53:08.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:08.898 --rc genhtml_branch_coverage=1 00:53:08.898 --rc genhtml_function_coverage=1 00:53:08.898 --rc genhtml_legend=1 00:53:08.898 --rc geninfo_all_blocks=1 00:53:08.898 --rc geninfo_unexecuted_blocks=1 00:53:08.898 00:53:08.898 ' 00:53:08.898 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:53:08.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:08.898 --rc genhtml_branch_coverage=1 00:53:08.899 --rc genhtml_function_coverage=1 00:53:08.899 --rc genhtml_legend=1 00:53:08.899 --rc geninfo_all_blocks=1 00:53:08.899 --rc geninfo_unexecuted_blocks=1 00:53:08.899 00:53:08.899 ' 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
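The cmp_versions trace above (from scripts/common.sh) splits each version string on '.' and '-' and compares the fields numerically from left to right, so 1.15 sorts below 2 here, and 3.1.1 sorts above 3.0.0 in the OpenSSL check later in this test. A minimal standalone sketch of the same idea (version_ge is a hypothetical helper, not part of the SPDK scripts):

  version_ge() {
      local -a a b
      IFS='.-' read -ra a <<< "$1"
      IFS='.-' read -ra b <<< "$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 0   # strictly newer
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 1   # strictly older
      done
      return 0   # all fields equal
  }
  version_ge 3.1.1 3.0.0 && echo "openssl is new enough"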
00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:53:08.899 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:53:08.899 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:53:09.159 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:53:09.159 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:53:09.159 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:53:09.159 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:53:09.159 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:53:09.159 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:53:09.159 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:53:09.159 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:53:09.159 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:53:09.159 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:53:09.159 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:53:09.159 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:53:09.159 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:53:09.159 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:53:09.159 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:53:09.159 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:09.159 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:53:09.159 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:09.159 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:53:09.159 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:09.159 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:53:09.159 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:53:09.159 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:53:09.159 Error setting digest 00:53:09.159 4072C1516D7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:53:09.159 4072C1516D7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:53:09.159 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:53:09.159 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:53:09.159 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:53:09.159 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:53:09.159 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:53:09.159 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:53:09.159 
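The "Error setting digest" above is the expected outcome, not a failure: with OPENSSL_CONF pointing at the generated spdk_fips.conf and the FIPS provider confirmed by 'openssl list -providers', a non-approved algorithm such as MD5 must be rejected, which is exactly what the NOT wrapper asserts. A quick smoke test in the same spirit, assuming the FIPS-enabled config is still in effect:

  # MD5 is not FIPS-approved and should fail; SHA-256 should still work.
  if echo -n test | openssl md5 >/dev/null 2>&1; then
      echo "MD5 accepted: FIPS restrictions NOT in effect"
  else
      echo "MD5 rejected: FIPS restrictions in effect"
  fi
  echo -n test | openssl sha256   # approved digest, expected to succeed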
10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:53:09.159 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:53:09.159 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:53:09.159 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:53:09.159 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:09.159 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:53:09.159 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:09.159 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:53:09.159 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:53:09.160 Cannot find device "nvmf_init_br" 00:53:09.160 10:04:16 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:53:09.160 Cannot find device "nvmf_init_br2" 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:53:09.160 Cannot find device "nvmf_tgt_br" 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:53:09.160 Cannot find device "nvmf_tgt_br2" 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:53:09.160 Cannot find device "nvmf_init_br" 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:53:09.160 Cannot find device "nvmf_init_br2" 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:53:09.160 Cannot find device "nvmf_tgt_br" 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:53:09.160 Cannot find device "nvmf_tgt_br2" 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:53:09.160 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:53:09.420 Cannot find device "nvmf_br" 00:53:09.420 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:53:09.420 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:53:09.420 Cannot find device "nvmf_init_if" 00:53:09.420 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:53:09.420 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:53:09.420 Cannot find device "nvmf_init_if2" 00:53:09.420 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:53:09.420 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:53:09.420 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:53:09.420 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:53:09.420 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:53:09.420 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:53:09.420 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:53:09.420 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:53:09.420 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:53:09.420 10:04:16 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:53:09.420 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:53:09.420 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:53:09.420 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:53:09.420 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:53:09.420 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:53:09.420 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:53:09.420 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:53:09.420 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:53:09.420 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:53:09.420 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:53:09.420 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:53:09.420 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:53:09.420 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:53:09.420 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:53:09.420 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:53:09.420 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:53:09.420 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:53:09.420 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:53:09.420 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:53:09.420 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:53:09.420 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:53:09.420 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:53:09.681 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:53:09.681 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:53:09.681 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:53:09.681 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:53:09.681 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:53:09.681 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:53:09.681 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:53:09.681 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:53:09.681 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:53:09.681 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:53:09.681 00:53:09.681 --- 10.0.0.3 ping statistics --- 00:53:09.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:09.681 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:53:09.681 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:53:09.681 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:53:09.681 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.091 ms 00:53:09.681 00:53:09.681 --- 10.0.0.4 ping statistics --- 00:53:09.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:09.681 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:53:09.681 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:53:09.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:53:09.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:53:09.681 00:53:09.681 --- 10.0.0.1 ping statistics --- 00:53:09.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:09.681 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:53:09.681 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:53:09.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:53:09.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:53:09.681 00:53:09.681 --- 10.0.0.2 ping statistics --- 00:53:09.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:09.681 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:53:09.681 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:53:09.681 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:53:09.681 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:53:09.681 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:53:09.681 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:53:09.681 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:53:09.681 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:53:09.681 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:53:09.681 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:53:09.681 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:53:09.681 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:53:09.681 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:53:09.681 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:53:09.681 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=85441 00:53:09.681 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:53:09.681 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 85441 00:53:09.681 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 85441 ']' 00:53:09.681 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:53:09.681 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:09.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:53:09.681 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:53:09.681 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:09.681 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:53:09.681 [2024-12-09 10:04:16.644092] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
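The nvmf target starting here runs entirely inside the nvmf_tgt_ns_spdk namespace (note the 'ip netns exec' prefix folded into NVMF_APP), which is why the pings above exercise 10.0.0.3 rather than localhost. Condensed to its essentials, and omitting the second interface pair and the iptables ACCEPT rules, the veth/bridge plumbing traced earlier amounts to the following sketch:

  ip netns add nvmf_tgt_ns_spdk
  # One veth pair for the initiator side, one for the target side.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk            # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  # A bridge in the root namespace joins the two free veth ends.
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ping -c 1 10.0.0.3                                        # initiator -> target reachability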
00:53:09.682 [2024-12-09 10:04:16.644170] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:53:09.941 [2024-12-09 10:04:16.791940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:09.941 [2024-12-09 10:04:16.846130] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:53:09.941 [2024-12-09 10:04:16.846188] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:53:09.941 [2024-12-09 10:04:16.846194] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:53:09.941 [2024-12-09 10:04:16.846199] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:53:09.941 [2024-12-09 10:04:16.846203] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:53:09.941 [2024-12-09 10:04:16.846485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:10.525 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:10.525 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:53:10.525 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:53:10.525 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:53:10.525 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:53:10.525 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:53:10.525 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:53:10.525 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:53:10.785 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:53:10.785 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.2ZC 00:53:10.785 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:53:10.785 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.2ZC 00:53:10.785 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.2ZC 00:53:10.785 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.2ZC 00:53:10.785 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:53:10.785 [2024-12-09 10:04:17.772574] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:53:10.785 [2024-12-09 10:04:17.788521] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:53:10.785 [2024-12-09 10:04:17.788719] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:53:11.045 malloc0 00:53:11.045 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:53:11.045 10:04:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=85495 00:53:11.045 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:53:11.045 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 85495 /var/tmp/bdevperf.sock 00:53:11.045 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 85495 ']' 00:53:11.045 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:53:11.045 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:11.045 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:53:11.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:53:11.045 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:11.045 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:53:11.045 [2024-12-09 10:04:17.935501] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:53:11.045 [2024-12-09 10:04:17.935577] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85495 ] 00:53:11.045 [2024-12-09 10:04:18.085014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:11.304 [2024-12-09 10:04:18.136512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:53:11.873 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:11.873 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:53:11.873 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.2ZC 00:53:12.133 10:04:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:53:12.392 [2024-12-09 10:04:19.247024] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:53:12.392 TLSTESTn1 00:53:12.392 10:04:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:53:12.652 Running I/O for 10 seconds... 
00:53:14.544 6404.00 IOPS, 25.02 MiB/s [2024-12-09T10:04:22.568Z] 6436.50 IOPS, 25.14 MiB/s [2024-12-09T10:04:23.506Z] 6469.33 IOPS, 25.27 MiB/s [2024-12-09T10:04:24.884Z] 6479.00 IOPS, 25.31 MiB/s [2024-12-09T10:04:25.451Z] 6460.00 IOPS, 25.23 MiB/s [2024-12-09T10:04:26.831Z] 6431.67 IOPS, 25.12 MiB/s [2024-12-09T10:04:27.778Z] 6389.86 IOPS, 24.96 MiB/s [2024-12-09T10:04:28.714Z] 6335.50 IOPS, 24.75 MiB/s [2024-12-09T10:04:29.651Z] 6311.78 IOPS, 24.66 MiB/s [2024-12-09T10:04:29.651Z] 6282.00 IOPS, 24.54 MiB/s 00:53:22.607 Latency(us) 00:53:22.607 [2024-12-09T10:04:29.651Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:53:22.607 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:53:22.607 Verification LBA range: start 0x0 length 0x2000 00:53:22.607 TLSTESTn1 : 10.01 6287.92 24.56 0.00 0.00 20323.81 4035.19 16255.22 00:53:22.607 [2024-12-09T10:04:29.651Z] =================================================================================================================== 00:53:22.607 [2024-12-09T10:04:29.651Z] Total : 6287.92 24.56 0.00 0.00 20323.81 4035.19 16255.22 00:53:22.607 { 00:53:22.607 "results": [ 00:53:22.607 { 00:53:22.607 "job": "TLSTESTn1", 00:53:22.607 "core_mask": "0x4", 00:53:22.607 "workload": "verify", 00:53:22.607 "status": "finished", 00:53:22.607 "verify_range": { 00:53:22.607 "start": 0, 00:53:22.607 "length": 8192 00:53:22.607 }, 00:53:22.607 "queue_depth": 128, 00:53:22.607 "io_size": 4096, 00:53:22.607 "runtime": 10.010788, 00:53:22.607 "iops": 6287.916595576692, 00:53:22.607 "mibps": 24.562174201471453, 00:53:22.607 "io_failed": 0, 00:53:22.607 "io_timeout": 0, 00:53:22.607 "avg_latency_us": 20323.807166866587, 00:53:22.607 "min_latency_us": 4035.1860262008736, 00:53:22.608 "max_latency_us": 16255.217467248909 00:53:22.608 } 00:53:22.608 ], 00:53:22.608 "core_count": 1 00:53:22.608 } 00:53:22.608 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:53:22.608 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:53:22.608 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:53:22.608 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:53:22.608 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:53:22.608 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:53:22.608 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:53:22.608 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:53:22.608 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:53:22.608 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:53:22.608 nvmf_trace.0 00:53:22.608 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:53:22.608 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 85495 00:53:22.608 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 85495 ']' 00:53:22.608 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill 
-0 85495 00:53:22.608 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:53:22.608 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:22.608 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85495 00:53:22.608 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:53:22.608 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:53:22.608 killing process with pid 85495 00:53:22.608 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85495' 00:53:22.608 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 85495 00:53:22.608 Received shutdown signal, test time was about 10.000000 seconds 00:53:22.608 00:53:22.608 Latency(us) 00:53:22.608 [2024-12-09T10:04:29.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:53:22.608 [2024-12-09T10:04:29.652Z] =================================================================================================================== 00:53:22.608 [2024-12-09T10:04:29.652Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:53:22.608 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 85495 00:53:22.868 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:53:22.868 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:53:22.868 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:53:22.868 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:53:22.868 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:53:22.868 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:53:22.868 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:53:22.868 rmmod nvme_tcp 00:53:22.868 rmmod nvme_fabrics 00:53:22.868 rmmod nvme_keyring 00:53:22.868 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:53:22.868 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:53:22.868 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:53:22.868 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 85441 ']' 00:53:22.868 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 85441 00:53:22.868 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 85441 ']' 00:53:22.868 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 85441 00:53:22.868 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:53:22.868 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:22.868 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85441 00:53:23.131 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:53:23.131 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:53:23.131 killing process with pid 85441 00:53:23.131 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85441' 00:53:23.131 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 85441 00:53:23.131 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 85441 00:53:23.390 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:53:23.390 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:53:23.390 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:53:23.390 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:53:23.390 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:53:23.390 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:53:23.390 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:53:23.390 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:53:23.390 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:53:23.390 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:53:23.390 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:53:23.390 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:53:23.390 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:53:23.390 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:53:23.390 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:53:23.390 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:53:23.390 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:53:23.390 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:53:23.390 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:53:23.390 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:53:23.390 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:53:23.649 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:53:23.649 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:53:23.649 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:23.649 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:53:23.649 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:23.649 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:53:23.649 10:04:30 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.2ZC 00:53:23.649 00:53:23.649 real 0m14.931s 00:53:23.649 user 0m20.778s 00:53:23.649 sys 0m5.477s 00:53:23.649 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:23.649 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:53:23.649 ************************************ 00:53:23.649 END TEST nvmf_fips 00:53:23.649 ************************************ 00:53:23.649 10:04:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:53:23.649 10:04:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:53:23.649 10:04:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:23.649 10:04:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:53:23.649 ************************************ 00:53:23.649 START TEST nvmf_control_msg_list 00:53:23.649 ************************************ 00:53:23.649 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:53:23.910 * Looking for test storage... 00:53:23.910 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:53:23.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:23.910 --rc genhtml_branch_coverage=1 00:53:23.910 --rc genhtml_function_coverage=1 00:53:23.910 --rc genhtml_legend=1 00:53:23.910 --rc geninfo_all_blocks=1 00:53:23.910 --rc geninfo_unexecuted_blocks=1 00:53:23.910 00:53:23.910 ' 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:53:23.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:23.910 --rc genhtml_branch_coverage=1 00:53:23.910 --rc genhtml_function_coverage=1 00:53:23.910 --rc genhtml_legend=1 00:53:23.910 --rc geninfo_all_blocks=1 00:53:23.910 --rc geninfo_unexecuted_blocks=1 00:53:23.910 00:53:23.910 ' 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:53:23.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:23.910 --rc genhtml_branch_coverage=1 00:53:23.910 --rc genhtml_function_coverage=1 00:53:23.910 --rc genhtml_legend=1 00:53:23.910 --rc geninfo_all_blocks=1 00:53:23.910 --rc geninfo_unexecuted_blocks=1 00:53:23.910 00:53:23.910 ' 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:53:23.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:23.910 --rc genhtml_branch_coverage=1 00:53:23.910 --rc genhtml_function_coverage=1 00:53:23.910 --rc genhtml_legend=1 00:53:23.910 --rc geninfo_all_blocks=1 00:53:23.910 --rc 
geninfo_unexecuted_blocks=1 00:53:23.910 00:53:23.910 ' 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:23.910 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:53:23.911 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:53:23.911 Cannot find device "nvmf_init_br" 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:53:23.911 Cannot find device "nvmf_init_br2" 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:53:23.911 Cannot find device "nvmf_tgt_br" 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:53:23.911 Cannot find device "nvmf_tgt_br2" 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:53:23.911 Cannot find device "nvmf_init_br" 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:53:23.911 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:53:24.171 Cannot find device "nvmf_init_br2" 00:53:24.171 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:53:24.171 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:53:24.171 Cannot find device "nvmf_tgt_br" 00:53:24.171 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:53:24.171 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:53:24.171 Cannot find device "nvmf_tgt_br2" 00:53:24.171 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:53:24.171 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:53:24.171 Cannot find device "nvmf_br" 00:53:24.171 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:53:24.171 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:53:24.171 Cannot find 
device "nvmf_init_if" 00:53:24.171 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:53:24.171 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:53:24.171 Cannot find device "nvmf_init_if2" 00:53:24.171 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:53:24.171 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:53:24.171 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:53:24.171 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:53:24.171 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:53:24.171 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:53:24.171 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:53:24.171 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:53:24.171 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:53:24.171 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:53:24.171 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:53:24.171 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:53:24.171 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:53:24.171 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:53:24.171 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:53:24.171 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:53:24.171 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:53:24.171 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:53:24.171 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:53:24.171 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:53:24.171 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:53:24.171 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:53:24.171 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:53:24.171 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:53:24.171 10:04:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:53:24.171 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:53:24.171 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:53:24.171 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:53:24.171 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:53:24.171 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:53:24.171 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:53:24.171 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:53:24.429 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:53:24.429 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:53:24.429 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:53:24.429 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:53:24.429 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:53:24.429 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:53:24.429 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:53:24.429 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:53:24.429 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:53:24.429 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:53:24.429 00:53:24.429 --- 10.0.0.3 ping statistics --- 00:53:24.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:24.429 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:53:24.429 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:53:24.429 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:53:24.429 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.111 ms 00:53:24.429 00:53:24.429 --- 10.0.0.4 ping statistics --- 00:53:24.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:24.430 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:53:24.430 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:53:24.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:53:24.430 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:53:24.430 00:53:24.430 --- 10.0.0.1 ping statistics --- 00:53:24.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:24.430 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:53:24.430 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:53:24.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:53:24.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:53:24.430 00:53:24.430 --- 10.0.0.2 ping statistics --- 00:53:24.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:24.430 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:53:24.430 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:53:24.430 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:53:24.430 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:53:24.430 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:53:24.430 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:53:24.430 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:53:24.430 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:53:24.430 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:53:24.430 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:53:24.430 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:53:24.430 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:53:24.430 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:53:24.430 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:53:24.430 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=85908 00:53:24.430 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:53:24.430 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 85908 00:53:24.430 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 85908 ']' 00:53:24.430 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:53:24.430 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:24.430 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:53:24.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
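The nvmfappstart/waitforlisten handshake logged here follows a simple pattern: start nvmf_tgt inside the test namespace, then poll its RPC socket until it responds. A minimal sketch of that pattern (binary path, namespace, and socket taken from this run; the polling loop approximates the waitforlisten helper in autotest_common.sh rather than reproducing it):

  # Launch the target inside the namespace set up by nvmf_veth_init
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!

  # Poll the UNIX-domain RPC socket until the app answers (max ~10s)
  for _ in $(seq 1 100); do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      sleep 0.1
  done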
00:53:24.430 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:24.430 10:04:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:53:24.430 [2024-12-09 10:04:31.366813] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:53:24.430 [2024-12-09 10:04:31.366876] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:53:24.689 [2024-12-09 10:04:31.516509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:24.689 [2024-12-09 10:04:31.593416] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:53:24.689 [2024-12-09 10:04:31.593480] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:53:24.689 [2024-12-09 10:04:31.593487] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:53:24.689 [2024-12-09 10:04:31.593492] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:53:24.689 [2024-12-09 10:04:31.593496] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:53:24.689 [2024-12-09 10:04:31.593850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:53:25.348 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:25.348 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:53:25.348 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:53:25.348 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:53:25.348 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:53:25.348 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:53:25.348 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:53:25.348 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:53:25.348 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:53:25.348 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:25.348 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:53:25.348 [2024-12-09 10:04:32.345714] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:53:25.348 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:25.348 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:53:25.348 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:53:25.348 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:53:25.349 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:25.349 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:53:25.349 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:25.349 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:53:25.349 Malloc0 00:53:25.349 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:25.349 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:53:25.349 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:25.349 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:53:25.608 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:25.608 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:53:25.608 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:25.608 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:53:25.608 [2024-12-09 10:04:32.405088] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:53:25.608 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:25.608 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=85958 00:53:25.608 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:53:25.608 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:53:25.608 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=85959 00:53:25.608 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=85960 00:53:25.608 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:53:25.608 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 85958 00:53:25.608 [2024-12-09 10:04:32.595052] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery 
subsystem. This behavior is deprecated and will be removed in a future release. 00:53:25.608 [2024-12-09 10:04:32.605128] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:53:25.608 [2024-12-09 10:04:32.605517] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:53:26.988 Initializing NVMe Controllers 00:53:26.989 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:53:26.989 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:53:26.989 Initialization complete. Launching workers. 00:53:26.989 ======================================================== 00:53:26.989 Latency(us) 00:53:26.989 Device Information : IOPS MiB/s Average min max 00:53:26.989 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 4627.00 18.07 215.92 92.83 494.36 00:53:26.989 ======================================================== 00:53:26.989 Total : 4627.00 18.07 215.92 92.83 494.36 00:53:26.989 00:53:26.989 Initializing NVMe Controllers 00:53:26.989 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:53:26.989 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:53:26.989 Initialization complete. Launching workers. 00:53:26.989 ======================================================== 00:53:26.989 Latency(us) 00:53:26.989 Device Information : IOPS MiB/s Average min max 00:53:26.989 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4593.00 17.94 217.47 112.06 431.48 00:53:26.989 ======================================================== 00:53:26.989 Total : 4593.00 17.94 217.47 112.06 431.48 00:53:26.989 00:53:26.989 Initializing NVMe Controllers 00:53:26.989 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:53:26.989 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:53:26.989 Initialization complete. Launching workers. 
00:53:26.989 ======================================================== 00:53:26.989 Latency(us) 00:53:26.989 Device Information : IOPS MiB/s Average min max 00:53:26.989 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 4580.00 17.89 218.11 95.38 392.05 00:53:26.989 ======================================================== 00:53:26.989 Total : 4580.00 17.89 218.11 95.38 392.05 00:53:26.989 00:53:26.989 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 85959 00:53:26.989 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 85960 00:53:26.989 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:53:26.989 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:53:26.989 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:53:26.989 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:53:26.989 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:53:26.989 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:53:26.989 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:53:26.989 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:53:26.989 rmmod nvme_tcp 00:53:26.989 rmmod nvme_fabrics 00:53:26.989 rmmod nvme_keyring 00:53:26.989 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:53:26.989 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:53:26.989 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:53:26.989 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 85908 ']' 00:53:26.989 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 85908 00:53:26.989 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 85908 ']' 00:53:26.989 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 85908 00:53:26.989 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:53:26.989 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:26.989 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85908 00:53:26.989 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:53:26.989 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:53:26.989 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85908' 00:53:26.989 killing process with pid 85908 00:53:26.989 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 85908 00:53:26.989 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 85908 00:53:27.249 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:53:27.249 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:53:27.249 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:53:27.249 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:53:27.249 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:53:27.249 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:53:27.249 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:53:27.249 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:53:27.249 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:53:27.249 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:53:27.249 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:53:27.249 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:53:27.250 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:53:27.250 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:53:27.250 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:53:27.250 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:53:27.250 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:53:27.250 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:53:27.250 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:53:27.250 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:53:27.250 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:53:27.250 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:53:27.510 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:53:27.510 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:27.510 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:53:27.510 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:27.510 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:53:27.510 00:53:27.510 real 0m3.740s 00:53:27.510 user 0m5.498s 00:53:27.510 
sys 0m1.617s 00:53:27.510 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:27.510 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:53:27.510 ************************************ 00:53:27.510 END TEST nvmf_control_msg_list 00:53:27.510 ************************************ 00:53:27.510 10:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:53:27.510 10:04:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:53:27.510 10:04:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:27.510 10:04:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:53:27.510 ************************************ 00:53:27.510 START TEST nvmf_wait_for_buf 00:53:27.510 ************************************ 00:53:27.510 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:53:27.510 * Looking for test storage... 00:53:27.510 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:53:27.510 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:53:27.510 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:53:27.510 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:53:27.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:27.771 --rc genhtml_branch_coverage=1 00:53:27.771 --rc genhtml_function_coverage=1 00:53:27.771 --rc genhtml_legend=1 00:53:27.771 --rc geninfo_all_blocks=1 00:53:27.771 --rc geninfo_unexecuted_blocks=1 00:53:27.771 00:53:27.771 ' 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:53:27.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:27.771 --rc genhtml_branch_coverage=1 00:53:27.771 --rc genhtml_function_coverage=1 00:53:27.771 --rc genhtml_legend=1 00:53:27.771 --rc geninfo_all_blocks=1 00:53:27.771 --rc geninfo_unexecuted_blocks=1 00:53:27.771 00:53:27.771 ' 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:53:27.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:27.771 --rc genhtml_branch_coverage=1 00:53:27.771 --rc genhtml_function_coverage=1 00:53:27.771 --rc genhtml_legend=1 00:53:27.771 --rc geninfo_all_blocks=1 00:53:27.771 --rc geninfo_unexecuted_blocks=1 00:53:27.771 00:53:27.771 ' 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:53:27.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:27.771 --rc genhtml_branch_coverage=1 00:53:27.771 --rc genhtml_function_coverage=1 00:53:27.771 --rc genhtml_legend=1 00:53:27.771 --rc geninfo_all_blocks=1 00:53:27.771 --rc geninfo_unexecuted_blocks=1 00:53:27.771 00:53:27.771 ' 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:53:27.771 10:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:27.771 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:53:27.772 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
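A note on what follows: nvmftestinit has just taken the virtual-network path (NET_TYPE=virt, is_hw=no), so nvmf_veth_init will first attempt to tear down any leftover devices (the "Cannot find device" / "Cannot open network namespace" messages below are that best-effort cleanup failing harmlessly on a clean host) and then build the test topology. A condensed sketch of that topology, reconstructed from the ip commands traced below (the per-command true fallbacks and exact ordering are omitted):

    # Two initiator veth pairs stay in the root namespace; the two target
    # pairs move into nvmf_tgt_ns_spdk; one bridge joins all four peer ends.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator
    ip addr add 10.0.0.2/24 dev nvmf_init_if2                                # initiator 2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2  # target 2
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

The four pings at the end of the setup (10.0.0.3/.4 from the root namespace, 10.0.0.1/.2 from inside it) then confirm the bridge forwards in both directions.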
00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:53:27.772 Cannot find device "nvmf_init_br" 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:53:27.772 Cannot find device "nvmf_init_br2" 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:53:27.772 Cannot find device "nvmf_tgt_br" 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:53:27.772 Cannot find device "nvmf_tgt_br2" 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:53:27.772 Cannot find device "nvmf_init_br" 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:53:27.772 Cannot find device "nvmf_init_br2" 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:53:27.772 Cannot find device "nvmf_tgt_br" 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:53:27.772 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:53:28.032 Cannot find device "nvmf_tgt_br2" 00:53:28.032 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:53:28.032 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:53:28.032 Cannot find device "nvmf_br" 00:53:28.032 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:53:28.032 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:53:28.032 Cannot find device "nvmf_init_if" 00:53:28.032 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:53:28.032 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:53:28.032 Cannot find device "nvmf_init_if2" 00:53:28.032 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:53:28.032 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:53:28.032 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:53:28.032 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:53:28.032 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:53:28.032 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:53:28.032 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:53:28.032 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:53:28.032 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:53:28.032 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:53:28.032 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:53:28.032 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:53:28.032 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:53:28.032 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:53:28.032 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:53:28.032 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:53:28.032 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:53:28.032 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:53:28.032 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:53:28.032 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:53:28.032 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:53:28.032 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:53:28.032 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:53:28.032 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:53:28.032 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:53:28.032 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:53:28.032 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:53:28.032 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:53:28.032 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:53:28.032 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:53:28.032 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:53:28.032 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:53:28.032 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:53:28.291 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:53:28.291 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:53:28.291 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:53:28.291 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:53:28.291 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:53:28.291 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:53:28.291 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:53:28.291 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:53:28.291 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.130 ms 00:53:28.291 00:53:28.291 --- 10.0.0.3 ping statistics --- 00:53:28.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:28.291 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:53:28.291 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:53:28.291 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:53:28.291 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.116 ms 00:53:28.291 00:53:28.291 --- 10.0.0.4 ping statistics --- 00:53:28.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:28.291 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:53:28.291 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:53:28.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:53:28.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:53:28.291 00:53:28.291 --- 10.0.0.1 ping statistics --- 00:53:28.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:28.291 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:53:28.291 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:53:28.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:53:28.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:53:28.291 00:53:28.291 --- 10.0.0.2 ping statistics --- 00:53:28.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:28.291 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:53:28.291 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:53:28.291 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:53:28.291 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:53:28.291 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:53:28.291 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:53:28.291 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:53:28.291 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:53:28.291 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:53:28.291 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:53:28.291 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:53:28.291 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:53:28.291 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:53:28.291 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:53:28.291 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=86205 00:53:28.291 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:53:28.291 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 86205 00:53:28.291 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 86205 ']' 00:53:28.291 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:53:28.291 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:28.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:53:28.291 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:53:28.291 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:28.291 10:04:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:53:28.291 [2024-12-09 10:04:35.233072] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
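One detail in the setup above is easy to miss: ipts (nvmf/common.sh@790) tags every iptables rule it installs with a comment, and the iptr teardown seen at the start of this section removes exactly those rules by filtering the tag out of a full save/restore cycle. A simplified reconstruction of the two helpers, based on the expansions visible in this trace:

    # ipts: install an iptables rule, tagged so teardown can find it again.
    ipts() {
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    # iptr: sweep all tagged rules in one pass - dump the ruleset, drop
    # every line carrying the tag, and load the result back.
    iptr() {
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }

    # Usage, as in the trace above:
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

This spares the test from remembering rule positions at cleanup time: whatever it inserted, and only that, disappears.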
00:53:28.291 [2024-12-09 10:04:35.233154] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:53:28.550 [2024-12-09 10:04:35.384907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:28.550 [2024-12-09 10:04:35.460622] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:53:28.550 [2024-12-09 10:04:35.460676] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:53:28.550 [2024-12-09 10:04:35.460682] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:53:28.550 [2024-12-09 10:04:35.460688] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:53:28.550 [2024-12-09 10:04:35.460693] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:53:28.550 [2024-12-09 10:04:35.461034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:29.486 10:04:36 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:53:29.486 Malloc0 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:53:29.486 [2024-12-09 10:04:36.410962] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:53:29.486 [2024-12-09 10:04:36.447020] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:29.486 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:53:29.745 [2024-12-09 10:04:36.643623] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 00:53:31.139 Initializing NVMe Controllers 00:53:31.139 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:53:31.139 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:53:31.139 Initialization complete. Launching workers. 00:53:31.139 ======================================================== 00:53:31.139 Latency(us) 00:53:31.140 Device Information : IOPS MiB/s Average min max 00:53:31.140 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 127.00 15.88 32794.25 8053.84 65160.36 00:53:31.140 ======================================================== 00:53:31.140 Total : 127.00 15.88 32794.25 8053.84 65160.36 00:53:31.140 00:53:31.140 10:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:53:31.140 10:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:53:31.140 10:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:31.140 10:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:53:31.140 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:31.140 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2006 00:53:31.140 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2006 -eq 0 ]] 00:53:31.140 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:53:31.140 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:53:31.140 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:53:31.140 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:53:31.140 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:53:31.140 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:53:31.140 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:53:31.140 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:53:31.140 rmmod nvme_tcp 00:53:31.140 rmmod nvme_fabrics 00:53:31.140 rmmod nvme_keyring 00:53:31.140 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:53:31.140 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:53:31.140 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:53:31.140 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 86205 ']' 00:53:31.140 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 86205 00:53:31.140 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 86205 ']' 00:53:31.140 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 86205 00:53:31.140 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 
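The pass criterion for this test is worth spelling out, since it is spread across several trace lines: iobuf_set_options capped the small-buffer pool at 154 entries before framework_start_init, the TCP transport was created with only 24 shared buffers (-n 24 -b 24), and the queue-depth-4 128 KiB spdk_nvme_perf run therefore has to exhaust the pool. The retry counter read back afterwards (2006 here) shows the target really did wait for buffers rather than fail. A sketch of that final check, assuming rpc_cmd is the usual SPDK rpc.py wrapper as elsewhere in this log:

    # Non-zero small_pool.retry for the nvmf_TCP module is the whole test:
    # it means buffer allocations had to be queued and retried at least once.
    retry_count=$(rpc_cmd iobuf_get_stats |
        jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    if [[ $retry_count -eq 0 ]]; then
        # illustrative failure handling; the trace only shows the
        # [[ retry_count -eq 0 ]] test itself
        echo "iobuf small pool never ran dry - nothing was exercised" >&2
        exit 1
    fi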
00:53:31.140 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:31.140 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86205 00:53:31.399 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:53:31.399 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:53:31.399 killing process with pid 86205 00:53:31.399 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86205' 00:53:31.399 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 86205 00:53:31.399 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 86205 00:53:31.658 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:53:31.658 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:53:31.658 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:53:31.658 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:53:31.658 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:53:31.658 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:53:31.658 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:53:31.658 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:53:31.658 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:53:31.658 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:53:31.658 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:53:31.658 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:53:31.658 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:53:31.658 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:53:31.658 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:53:31.658 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:53:31.658 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:53:31.658 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:53:31.658 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:53:31.658 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:53:31.658 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:53:31.917 10:04:38 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:53:31.917 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:53:31.917 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:31.917 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:53:31.917 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:31.917 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:53:31.917 00:53:31.917 real 0m4.374s 00:53:31.917 user 0m3.706s 00:53:31.917 sys 0m1.022s 00:53:31.917 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:31.917 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:53:31.917 ************************************ 00:53:31.917 END TEST nvmf_wait_for_buf 00:53:31.917 ************************************ 00:53:31.917 10:04:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:53:31.917 10:04:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:53:31.917 10:04:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:53:31.917 10:04:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:53:31.917 10:04:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:31.917 10:04:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:53:31.917 ************************************ 00:53:31.917 START TEST nvmf_nsid 00:53:31.917 ************************************ 00:53:31.917 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:53:32.178 * Looking for test storage... 
00:53:32.178 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:53:32.178 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:53:32.178 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:53:32.178 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:53:32.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:32.178 --rc genhtml_branch_coverage=1 00:53:32.178 --rc genhtml_function_coverage=1 00:53:32.178 --rc genhtml_legend=1 00:53:32.178 --rc geninfo_all_blocks=1 00:53:32.178 --rc geninfo_unexecuted_blocks=1 00:53:32.178 00:53:32.178 ' 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:53:32.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:32.178 --rc genhtml_branch_coverage=1 00:53:32.178 --rc genhtml_function_coverage=1 00:53:32.178 --rc genhtml_legend=1 00:53:32.178 --rc geninfo_all_blocks=1 00:53:32.178 --rc geninfo_unexecuted_blocks=1 00:53:32.178 00:53:32.178 ' 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:53:32.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:32.178 --rc genhtml_branch_coverage=1 00:53:32.178 --rc genhtml_function_coverage=1 00:53:32.178 --rc genhtml_legend=1 00:53:32.178 --rc geninfo_all_blocks=1 00:53:32.178 --rc geninfo_unexecuted_blocks=1 00:53:32.178 00:53:32.178 ' 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:53:32.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:32.178 --rc genhtml_branch_coverage=1 00:53:32.178 --rc genhtml_function_coverage=1 00:53:32.178 --rc genhtml_legend=1 00:53:32.178 --rc geninfo_all_blocks=1 00:53:32.178 --rc geninfo_unexecuted_blocks=1 00:53:32.178 00:53:32.178 ' 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
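The lt 1.15 2 walk traced above is the generic version comparator in scripts/common.sh deciding whether the installed lcov predates 2.x, which in turn selects the --rc lcov_branch_coverage=1 flavor of LCOV_OPTS exported right after. A simplified re-derivation of the comparison (numeric components only; the real decimal() helper also validates each part, which this sketch skips):

    # lt A B: succeed (return 0) iff version A sorts strictly before B.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"   # split on '.', '-' and ':'
        IFS=.-: read -ra ver2 <<< "$2"
        local v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
        done
        return 1   # equal versions are not "less than"
    }

    lt 1.15 2 && echo "lcov is older than 2.x"   # 1 < 2 in the first slot, so this prints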
00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:53:32.178 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:53:32.179 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:53:32.179 Cannot find device "nvmf_init_br" 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:53:32.179 Cannot find device "nvmf_init_br2" 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:53:32.179 Cannot find device "nvmf_tgt_br" 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:53:32.179 Cannot find device "nvmf_tgt_br2" 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:53:32.179 Cannot find device "nvmf_init_br" 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:53:32.179 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:53:32.439 Cannot find device "nvmf_init_br2" 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:53:32.439 Cannot find device "nvmf_tgt_br" 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:53:32.439 Cannot find device "nvmf_tgt_br2" 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:53:32.439 Cannot find device "nvmf_br" 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:53:32.439 Cannot find device "nvmf_init_if" 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:53:32.439 Cannot find device "nvmf_init_if2" 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:53:32.439 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:53:32.439 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:53:32.439 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
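[Editor's note] At this point nvmf_veth_init has finished building the test topology. Each teardown command above tolerated its "Cannot find device" failure via a trailing true (the "# true" entries), and the setup then created the nvmf_tgt_ns_spdk namespace and four veth pairs: the target ends (nvmf_tgt_if/nvmf_tgt_if2, 10.0.0.3/24 and 10.0.0.4/24) were moved into the namespace, the initiator ends (nvmf_init_if/nvmf_init_if2, 10.0.0.1/24 and 10.0.0.2/24) stayed in the root namespace, and all four bridge-side peers were enslaved to nvmf_br. Condensed into a standalone sketch, using the same commands as the trace above minus the xtrace prefixes and some of the per-link "up" steps:

    # Rebuild the trace's topology: one netns, four veth pairs, one bridge.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator pair 1
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator pair 2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target pair 1
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target pair 2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # target ends live in the netns
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br" up
        ip link set "$br" master nvmf_br                          # all peers joined via nvmf_br
    done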
00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:53:32.700 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:53:32.700 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms
00:53:32.700
00:53:32.700 --- 10.0.0.3 ping statistics ---
00:53:32.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:53:32.700 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms
00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:53:32.700 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:53:32.700 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.037 ms
00:53:32.700
00:53:32.700 --- 10.0.0.4 ping statistics ---
00:53:32.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:53:32.700 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms
00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:53:32.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:53:32.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms
00:53:32.700
00:53:32.700 --- 10.0.0.1 ping statistics ---
00:53:32.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:53:32.700 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms
00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:53:32.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:53:32.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms
00:53:32.700
00:53:32.700 --- 10.0.0.2 ping statistics ---
00:53:32.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:53:32.700 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms
00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0
00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1
00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable
00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=86500
00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1
00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 86500
00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 86500 ']'
00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100
00:53:32.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable
00:53:32.700 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:53:32.700 [2024-12-09 10:04:39.634261] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization...
00:53:32.700 [2024-12-09 10:04:39.634343] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:53:32.960 [2024-12-09 10:04:39.783173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:32.960 [2024-12-09 10:04:39.860112] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:53:32.960 [2024-12-09 10:04:39.860165] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:53:32.960 [2024-12-09 10:04:39.860172] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:53:32.960 [2024-12-09 10:04:39.860177] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:53:32.960 [2024-12-09 10:04:39.860182] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:53:32.960 [2024-12-09 10:04:39.860540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:53:33.527 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:33.527 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:53:33.527 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:53:33.527 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:53:33.527 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:53:33.787 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:53:33.787 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:53:33.787 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=86541 00:53:33.787 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:53:33.787 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:53:33.787 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:53:33.787 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:53:33.787 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:53:33.787 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:53:33.787 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:53:33.787 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:53:33.787 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:53:33.787 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:53:33.787 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:53:33.787 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:53:33.787 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
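[Editor's note] The get_main_ns_ip helper traced above (nvmf/common.sh@769-783) maps the transport to the name of the variable holding the address and then dereferences it, which is why the trace shows ip=NVMF_INITIATOR_IP followed by echo 10.0.0.1. A minimal sketch of that indirection, assuming the transport value (which expands to "tcp" in the trace) lives in a variable such as $TEST_TRANSPORT; the trace does not show the variable's actual name:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # -> 10.0.0.3, target side of the netns
        ip_candidates["tcp"]=NVMF_INITIATOR_IP       # -> 10.0.0.1, initiator side
        ip=${ip_candidates[$TEST_TRANSPORT]}         # pick the variable *name* for this transport
        echo "${!ip}"                                # bash indirect expansion yields the address
    }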
00:53:33.787 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:53:33.787 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:53:33.787 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=ef7faf4e-79da-4b4e-97ad-fc2eccfc2f24 00:53:33.787 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:53:33.787 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=5d8e416c-0122-4c3c-b02b-f4645fbba65e 00:53:33.787 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:53:33.787 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=fdce8d97-e2ef-4b98-ac6d-965b8173e91d 00:53:33.787 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:53:33.787 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:33.787 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:53:33.787 null0 00:53:33.787 null1 00:53:33.787 null2 00:53:33.787 [2024-12-09 10:04:40.663239] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:53:33.787 [2024-12-09 10:04:40.672602] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:53:33.787 [2024-12-09 10:04:40.672684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86541 ] 00:53:33.787 [2024-12-09 10:04:40.687371] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:53:33.787 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:33.787 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 86541 /var/tmp/tgt2.sock 00:53:33.787 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 86541 ']' 00:53:33.787 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:53:33.787 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:33.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:53:33.787 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
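[Editor's note] Two SPDK targets are now in play: nvmf_tgt (pid 86500) inside nvmf_tgt_ns_spdk serving 10.0.0.3:4420, and a second spdk_tgt (pid 86541) in the root namespace whose RPC socket is /var/tmp/tgt2.sock and whose address resolved to 10.0.0.1. The "Waiting for process..." lines come from waitforlisten, which blocks until the RPC socket answers. A minimal sketch of that readiness poll, assuming it runs from the SPDK repo root; the real helper in autotest_common.sh is more elaborate:

    waitforlisten_sketch() {   # hypothetical name; the traced helper is waitforlisten
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1                         # target died early
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            sleep 0.1
        done
        return 1                                                            # timed out
    }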
00:53:33.787 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:33.787 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:53:33.787 [2024-12-09 10:04:40.821341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:34.046 [2024-12-09 10:04:40.878299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:34.306 10:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:34.306 10:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:53:34.306 10:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:53:34.565 [2024-12-09 10:04:41.477437] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:53:34.565 [2024-12-09 10:04:41.493856] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:53:34.565 nvme0n1 nvme0n2 00:53:34.565 nvme1n1 00:53:34.565 10:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:53:34.565 10:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:53:34.565 10:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 00:53:34.825 10:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:53:34.825 10:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:53:34.825 10:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:53:34.825 10:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:53:34.825 10:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:53:34.825 10:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:53:34.825 10:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:53:34.825 10:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:53:34.825 10:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:53:34.825 10:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:53:34.825 10:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:53:34.825 10:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:53:34.825 10:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:53:35.761 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:53:35.761 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:53:35.761 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:53:35.761 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:53:35.761 10:04:42 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:53:35.761 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid ef7faf4e-79da-4b4e-97ad-fc2eccfc2f24 00:53:35.761 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:53:35.761 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:53:35.761 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:53:35.761 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:53:35.761 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:53:35.761 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ef7faf4e79da4b4e97adfc2eccfc2f24 00:53:35.761 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo EF7FAF4E79DA4B4E97ADFC2ECCFC2F24 00:53:35.761 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ EF7FAF4E79DA4B4E97ADFC2ECCFC2F24 == \E\F\7\F\A\F\4\E\7\9\D\A\4\B\4\E\9\7\A\D\F\C\2\E\C\C\F\C\2\F\2\4 ]] 00:53:35.761 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:53:35.761 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:53:35.761 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:53:35.761 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:53:36.020 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:53:36.020 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:53:36.020 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:53:36.020 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 5d8e416c-0122-4c3c-b02b-f4645fbba65e 00:53:36.020 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:53:36.020 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:53:36.020 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:53:36.020 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:53:36.020 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:53:36.020 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=5d8e416c01224c3cb02bf4645fbba65e 00:53:36.020 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 5D8E416C01224C3CB02BF4645FBBA65E 00:53:36.020 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 5D8E416C01224C3CB02BF4645FBBA65E == \5\D\8\E\4\1\6\C\0\1\2\2\4\C\3\C\B\0\2\B\F\4\6\4\5\F\B\B\A\6\5\E ]] 00:53:36.020 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:53:36.020 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:53:36.020 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:53:36.020 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:53:36.020 10:04:42 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:53:36.020 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:53:36.020 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:53:36.020 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid fdce8d97-e2ef-4b98-ac6d-965b8173e91d 00:53:36.020 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:53:36.020 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:53:36.020 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:53:36.020 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:53:36.020 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:53:36.020 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=fdce8d97e2ef4b98ac6d965b8173e91d 00:53:36.020 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo FDCE8D97E2EF4B98AC6D965B8173E91D 00:53:36.020 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ FDCE8D97E2EF4B98AC6D965B8173E91D == \F\D\C\E\8\D\9\7\E\2\E\F\4\B\9\8\A\C\6\D\9\6\5\B\8\1\7\3\E\9\1\D ]] 00:53:36.020 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:53:36.278 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:53:36.278 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:53:36.278 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 86541 00:53:36.278 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 86541 ']' 00:53:36.278 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 86541 00:53:36.278 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:53:36.278 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:36.279 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86541 00:53:36.279 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:53:36.279 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:53:36.279 killing process with pid 86541 00:53:36.279 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86541' 00:53:36.279 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 86541 00:53:36.279 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 86541 00:53:36.537 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:53:36.537 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:53:36.537 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:53:36.537 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:53:36.537 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 
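[Editor's note] The three nsid checks above all follow the same recipe: derive the expected NGUID from the uuidgen value by dropping the dashes (the traced tr -d -) and uppercasing, then compare it against what nvme id-ns reports for that namespace. A standalone sketch of one check, reconstructed from the trace rather than from the actual nsid.sh source:

    uuid=ef7faf4e-79da-4b4e-97ad-fc2eccfc2f24                      # ns1uuid from the trace
    expected=$(tr -d - <<< "$uuid" | tr '[:lower:]' '[:upper:]')   # EF7FAF4E79DA4B4E97ADFC2ECCFC2F24
    actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')
    [[ $actual == "$expected" ]] && echo "nsid 1 NGUID matches: $actual"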
00:53:36.537 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:53:36.537 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:53:36.537 rmmod nvme_tcp 00:53:36.537 rmmod nvme_fabrics 00:53:36.537 rmmod nvme_keyring 00:53:36.796 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:53:36.796 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:53:36.796 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:53:36.796 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 86500 ']' 00:53:36.796 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 86500 00:53:36.796 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 86500 ']' 00:53:36.796 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 86500 00:53:36.796 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:53:36.796 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:36.796 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86500 00:53:36.796 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:53:36.796 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:53:36.796 killing process with pid 86500 00:53:36.796 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86500' 00:53:36.796 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 86500 00:53:36.796 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 86500 00:53:37.055 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:53:37.055 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:53:37.055 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:53:37.055 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:53:37.055 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:53:37.055 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:53:37.055 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:53:37.055 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:53:37.055 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:53:37.055 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:53:37.055 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:53:37.055 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:53:37.055 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:53:37.055 10:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link 
set nvmf_init_br down 00:53:37.055 10:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:53:37.055 10:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:53:37.055 10:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:53:37.055 10:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:53:37.055 10:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:53:37.055 10:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:53:37.315 10:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:53:37.315 10:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:53:37.315 10:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:53:37.315 10:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:37.315 10:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:53:37.315 10:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:37.315 10:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:53:37.315 00:53:37.315 real 0m5.366s 00:53:37.315 user 0m7.750s 00:53:37.315 sys 0m1.546s 00:53:37.315 10:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:37.315 ************************************ 00:53:37.315 END TEST nvmf_nsid 00:53:37.315 ************************************ 00:53:37.315 10:04:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:53:37.315 10:04:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:53:37.315 00:53:37.315 real 7m9.586s 00:53:37.315 user 17m8.038s 00:53:37.315 sys 1m27.500s 00:53:37.315 10:04:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:37.315 10:04:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:53:37.315 ************************************ 00:53:37.315 END TEST nvmf_target_extra 00:53:37.315 ************************************ 00:53:37.316 10:04:44 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:53:37.316 10:04:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:53:37.316 10:04:44 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:37.316 10:04:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:53:37.316 ************************************ 00:53:37.316 START TEST nvmf_host 00:53:37.316 ************************************ 00:53:37.316 10:04:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:53:37.576 * Looking for test storage... 
00:53:37.576 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:53:37.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:37.576 --rc genhtml_branch_coverage=1 00:53:37.576 --rc genhtml_function_coverage=1 00:53:37.576 --rc genhtml_legend=1 00:53:37.576 --rc geninfo_all_blocks=1 00:53:37.576 --rc geninfo_unexecuted_blocks=1 00:53:37.576 00:53:37.576 ' 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:53:37.576 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:53:37.576 --rc genhtml_branch_coverage=1 00:53:37.576 --rc genhtml_function_coverage=1 00:53:37.576 --rc genhtml_legend=1 00:53:37.576 --rc geninfo_all_blocks=1 00:53:37.576 --rc geninfo_unexecuted_blocks=1 00:53:37.576 00:53:37.576 ' 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:53:37.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:37.576 --rc genhtml_branch_coverage=1 00:53:37.576 --rc genhtml_function_coverage=1 00:53:37.576 --rc genhtml_legend=1 00:53:37.576 --rc geninfo_all_blocks=1 00:53:37.576 --rc geninfo_unexecuted_blocks=1 00:53:37.576 00:53:37.576 ' 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:53:37.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:37.576 --rc genhtml_branch_coverage=1 00:53:37.576 --rc genhtml_function_coverage=1 00:53:37.576 --rc genhtml_legend=1 00:53:37.576 --rc geninfo_all_blocks=1 00:53:37.576 --rc geninfo_unexecuted_blocks=1 00:53:37.576 00:53:37.576 ' 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:53:37.576 10:04:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:53:37.577 10:04:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:53:37.577 10:04:44 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:53:37.577 10:04:44 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:53:37.577 10:04:44 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:53:37.577 10:04:44 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:53:37.577 10:04:44 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:37.577 10:04:44 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:37.577 10:04:44 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:37.577 10:04:44 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:53:37.577 10:04:44 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:37.577 10:04:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:53:37.577 10:04:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:53:37.577 10:04:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:53:37.577 10:04:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:53:37.577 10:04:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:53:37.577 10:04:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:53:37.577 10:04:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:53:37.577 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:53:37.577 10:04:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:53:37.577 10:04:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:53:37.577 10:04:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:53:37.577 10:04:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:53:37.577 10:04:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:53:37.577 10:04:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:53:37.577 10:04:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 
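[Editor's note] Before each test file runs, the harness probes the installed lcov with the cmp_versions machinery traced above (scripts/common.sh@333-368): both versions are split on ".", "-" and ":" into arrays and compared component-wise, with missing components defaulting to 0. A sketch of the "<" case only, reconstructed from the xtrace:

    cmp_lt_sketch() {   # hypothetical name; mimics "lt 1.15 2" -> cmp_versions "$1" '<' "$2"
        local ver1 ver2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly smaller component: "<" holds
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly larger: "<" fails
        done
        return 1   # equal versions are not "<"
    }
    # cmp_lt_sketch 1.15 2 succeeds (1 < 2), so the branch-/function-coverage
    # LCOV_OPTS seen above get exported for this lcov.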
00:53:37.577 10:04:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:53:37.577 10:04:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:37.577 10:04:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:53:37.838 ************************************ 00:53:37.838 START TEST nvmf_multicontroller 00:53:37.838 ************************************ 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:53:37.838 * Looking for test storage... 00:53:37.838 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:53:37.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:37.838 --rc genhtml_branch_coverage=1 00:53:37.838 --rc genhtml_function_coverage=1 00:53:37.838 --rc genhtml_legend=1 00:53:37.838 --rc geninfo_all_blocks=1 00:53:37.838 --rc geninfo_unexecuted_blocks=1 00:53:37.838 00:53:37.838 ' 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:53:37.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:37.838 --rc genhtml_branch_coverage=1 00:53:37.838 --rc genhtml_function_coverage=1 00:53:37.838 --rc genhtml_legend=1 00:53:37.838 --rc geninfo_all_blocks=1 00:53:37.838 --rc geninfo_unexecuted_blocks=1 00:53:37.838 00:53:37.838 ' 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:53:37.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:37.838 --rc genhtml_branch_coverage=1 00:53:37.838 --rc genhtml_function_coverage=1 00:53:37.838 --rc genhtml_legend=1 00:53:37.838 --rc geninfo_all_blocks=1 00:53:37.838 --rc geninfo_unexecuted_blocks=1 00:53:37.838 00:53:37.838 ' 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:53:37.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:37.838 --rc genhtml_branch_coverage=1 00:53:37.838 --rc genhtml_function_coverage=1 00:53:37.838 --rc genhtml_legend=1 00:53:37.838 --rc geninfo_all_blocks=1 00:53:37.838 --rc geninfo_unexecuted_blocks=1 00:53:37.838 00:53:37.838 ' 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:53:37.838 10:04:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:53:37.838 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:38.099 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:38.099 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:38.099 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:53:38.099 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:38.099 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:53:38.099 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:53:38.099 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:53:38.099 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:53:38.099 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:53:38.099 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:53:38.099 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:53:38.099 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:53:38.099 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:53:38.099 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:53:38.099 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:53:38.099 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:53:38.099 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:53:38.099 10:04:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:53:38.099 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:53:38.099 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:53:38.099 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:53:38.099 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@460 -- # nvmf_veth_init 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:53:38.100 10:04:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:53:38.100 Cannot find device "nvmf_init_br" 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:53:38.100 Cannot find device "nvmf_init_br2" 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:53:38.100 Cannot find device "nvmf_tgt_br" 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # true 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:53:38.100 Cannot find device "nvmf_tgt_br2" 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # true 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:53:38.100 Cannot find device "nvmf_init_br" 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # true 00:53:38.100 10:04:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:53:38.100 Cannot find device "nvmf_init_br2" 00:53:38.100 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # true 00:53:38.100 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:53:38.100 Cannot find device "nvmf_tgt_br" 00:53:38.100 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # true 00:53:38.100 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:53:38.100 Cannot find device "nvmf_tgt_br2" 00:53:38.100 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # true 00:53:38.100 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:53:38.100 Cannot find device "nvmf_br" 00:53:38.100 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # true 00:53:38.100 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:53:38.100 Cannot find device "nvmf_init_if" 00:53:38.100 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # true 00:53:38.100 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:53:38.100 Cannot find device "nvmf_init_if2" 00:53:38.100 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # true 00:53:38.100 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:53:38.100 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:53:38.100 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@173 -- # true 00:53:38.100 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:53:38.100 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:53:38.100 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # true 00:53:38.100 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:53:38.100 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:53:38.100 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:53:38.100 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:53:38.361 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:53:38.361 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:53:38.361 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:53:38.361 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:53:38.361 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:53:38.361 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:53:38.361 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:53:38.362 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:53:38.362 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:53:38.362 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:53:38.362 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:53:38.362 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:53:38.362 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:53:38.362 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:53:38.362 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:53:38.362 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:53:38.362 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:53:38.362 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:53:38.362 10:04:45 
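
nvmf_veth_init above first tries to tear down any leftover interfaces; the "Cannot find device" and "Cannot open network namespace" complaints are expected on a clean machine, and the `# true` entries after each one show the guards swallowing those failures. It then builds the actual topology: one network namespace for the target, four veth pairs, addresses on the `*_if` ends, and a bridge that (on the lines that follow) enslaves the `*_br` peer ends. Condensed into a standalone sketch with the same names and addresses as the log:

  # Initiator ends stay in the root namespace (10.0.0.1/24 and 10.0.0.2/24);
  # target ends move into nvmf_tgt_ns_spdk (10.0.0.3/24 and 10.0.0.4/24);
  # nvmf_br bridges the *_br peers so the two sides can reach each other.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_br up

The four pings that follow (10.0.0.3/10.0.0.4 from the root namespace, 10.0.0.1/10.0.0.2 from inside the target namespace) verify the bridge end to end in both directions before any NVMe-oF traffic is attempted.
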
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:53:38.362 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:53:38.362 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:53:38.362 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:53:38.362 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:53:38.362 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:53:38.362 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:53:38.362 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:53:38.362 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:53:38.362 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:53:38.362 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:53:38.362 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:53:38.362 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.172 ms 00:53:38.362 00:53:38.362 --- 10.0.0.3 ping statistics --- 00:53:38.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:38.362 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:53:38.362 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:53:38.624 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:53:38.625 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.099 ms 00:53:38.625 00:53:38.625 --- 10.0.0.4 ping statistics --- 00:53:38.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:38.625 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:53:38.625 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:53:38.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:53:38.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:53:38.625 00:53:38.625 --- 10.0.0.1 ping statistics --- 00:53:38.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:38.625 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:53:38.625 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:53:38.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:53:38.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:53:38.625 00:53:38.625 --- 10.0.0.2 ping statistics --- 00:53:38.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:38.625 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:53:38.625 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:53:38.625 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@461 -- # return 0 00:53:38.625 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:53:38.625 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:53:38.625 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:53:38.625 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:53:38.625 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:53:38.625 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:53:38.625 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:53:38.625 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:53:38.625 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:53:38.625 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:53:38.625 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:53:38.625 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=86919 00:53:38.625 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 86919 00:53:38.625 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:53:38.625 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 86919 ']' 00:53:38.625 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:53:38.625 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:38.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:53:38.625 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:53:38.625 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:38.625 10:04:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:53:38.625 [2024-12-09 10:04:45.527114] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:53:38.625 [2024-12-09 10:04:45.527180] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:53:38.885 [2024-12-09 10:04:45.670737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:53:38.885 [2024-12-09 10:04:45.730175] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:53:38.885 [2024-12-09 10:04:45.730223] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:53:38.885 [2024-12-09 10:04:45.730230] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:53:38.885 [2024-12-09 10:04:45.730235] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:53:38.885 [2024-12-09 10:04:45.730239] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:53:38.885 [2024-12-09 10:04:45.731115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:53:38.885 [2024-12-09 10:04:45.731225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:38.885 [2024-12-09 10:04:45.731225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:53:39.455 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:39.455 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:53:39.455 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:53:39.455 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:53:39.455 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:53:39.715 [2024-12-09 10:04:46.514192] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:53:39.715 Malloc0 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- common/autotest_common.sh@10 -- # set +x 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:53:39.715 [2024-12-09 10:04:46.586296] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:53:39.715 [2024-12-09 10:04:46.598245] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:53:39.715 Malloc1 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4421 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=86971 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 86971 /var/tmp/bdevperf.sock 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 86971 ']' 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:39.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
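
By this point the target side is fully configured: nvmf_tgt was started inside the namespace (common.sh@508 above runs it under `ip netns exec nvmf_tgt_ns_spdk` with `-m 0xE`, which is why three reactors came up on cores 1-3), and the `rpc_cmd` calls issued without an explicit `-s` went to its default socket /var/tmp/spdk.sock. bdevperf, launched with `-z` so it idles until the perform_tests RPC issued later via bdevperf.py, will act as the NVMe-oF host on its own socket /var/tmp/bdevperf.sock (`-q 128` queue depth, `-o 4096` I/O size, `-w write`, `-t 1` second). The target-side sequence rewritten as a plain script (methods and arguments taken verbatim from the rpc_cmd trace; invoking them through scripts/rpc.py is an assumption, since the log goes through the rpc_cmd wrapper):

  # Target-side configuration replayed via rpc.py (arguments as logged above).
  RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -u 8192   # TCP transport, options as logged
  $RPC bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM disk, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  # cnode2 is set up the same way with Malloc1, giving the negative tests a
  # second subsystem on the same address/port pairs.
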
00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:39.715 10:04:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:53:40.653 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:40.653 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:53:40.653 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:53:40.653 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:40.653 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:53:40.926 NVMe0n1 00:53:40.926 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:40.926 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:53:40.926 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:40.926 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:53:40.926 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:53:40.926 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:40.926 1 00:53:40.926 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:53:40.926 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:53:40.926 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:53:40.926 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:53:40.926 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:40.926 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:53:40.926 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:40.926 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:53:40.926 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:40.926 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:53:40.926 2024/12/09 10:04:47 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 
hostnqn:nqn.2021-09-7.io.spdk:00001 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:53:40.926 request: 00:53:40.926 { 00:53:40.926 "method": "bdev_nvme_attach_controller", 00:53:40.926 "params": { 00:53:40.926 "name": "NVMe0", 00:53:40.926 "trtype": "tcp", 00:53:40.926 "traddr": "10.0.0.3", 00:53:40.926 "adrfam": "ipv4", 00:53:40.926 "trsvcid": "4420", 00:53:40.926 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:53:40.926 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:53:40.926 "hostaddr": "10.0.0.1", 00:53:40.926 "prchk_reftag": false, 00:53:40.926 "prchk_guard": false, 00:53:40.926 "hdgst": false, 00:53:40.926 "ddgst": false, 00:53:40.926 "allow_unrecognized_csi": false 00:53:40.926 } 00:53:40.926 } 00:53:40.926 Got JSON-RPC error response 00:53:40.926 GoRPCClient: error on JSON-RPC call 00:53:40.926 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:53:40.926 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:53:40.926 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:53:40.926 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:53:40.926 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:53:40.926 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:53:40.926 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:53:40.926 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:53:40.926 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:53:40.926 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:40.926 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:53:40.926 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:40.926 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:53:40.926 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:40.926 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:53:40.926 2024/12/09 10:04:47 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: 
error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:53:40.926 request: 00:53:40.926 { 00:53:40.926 "method": "bdev_nvme_attach_controller", 00:53:40.926 "params": { 00:53:40.926 "name": "NVMe0", 00:53:40.926 "trtype": "tcp", 00:53:40.926 "traddr": "10.0.0.3", 00:53:40.926 "adrfam": "ipv4", 00:53:40.926 "trsvcid": "4420", 00:53:40.926 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:53:40.926 "hostaddr": "10.0.0.1", 00:53:40.926 "prchk_reftag": false, 00:53:40.926 "prchk_guard": false, 00:53:40.926 "hdgst": false, 00:53:40.926 "ddgst": false, 00:53:40.926 "allow_unrecognized_csi": false 00:53:40.926 } 00:53:40.926 } 00:53:40.926 Got JSON-RPC error response 00:53:40.926 GoRPCClient: error on JSON-RPC call 00:53:40.926 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:53:40.927 2024/12/09 10:04:47 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:53:40.927 request: 00:53:40.927 { 00:53:40.927 
"method": "bdev_nvme_attach_controller", 00:53:40.927 "params": { 00:53:40.927 "name": "NVMe0", 00:53:40.927 "trtype": "tcp", 00:53:40.927 "traddr": "10.0.0.3", 00:53:40.927 "adrfam": "ipv4", 00:53:40.927 "trsvcid": "4420", 00:53:40.927 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:53:40.927 "hostaddr": "10.0.0.1", 00:53:40.927 "prchk_reftag": false, 00:53:40.927 "prchk_guard": false, 00:53:40.927 "hdgst": false, 00:53:40.927 "ddgst": false, 00:53:40.927 "multipath": "disable", 00:53:40.927 "allow_unrecognized_csi": false 00:53:40.927 } 00:53:40.927 } 00:53:40.927 Got JSON-RPC error response 00:53:40.927 GoRPCClient: error on JSON-RPC call 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:53:40.927 2024/12/09 10:04:47 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:53:40.927 request: 00:53:40.927 { 00:53:40.927 "method": "bdev_nvme_attach_controller", 00:53:40.927 "params": { 00:53:40.927 "name": "NVMe0", 00:53:40.927 "trtype": "tcp", 00:53:40.927 "traddr": 
"10.0.0.3", 00:53:40.927 "adrfam": "ipv4", 00:53:40.927 "trsvcid": "4420", 00:53:40.927 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:53:40.927 "hostaddr": "10.0.0.1", 00:53:40.927 "prchk_reftag": false, 00:53:40.927 "prchk_guard": false, 00:53:40.927 "hdgst": false, 00:53:40.927 "ddgst": false, 00:53:40.927 "multipath": "failover", 00:53:40.927 "allow_unrecognized_csi": false 00:53:40.927 } 00:53:40.927 } 00:53:40.927 Got JSON-RPC error response 00:53:40.927 GoRPCClient: error on JSON-RPC call 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:53:40.927 NVMe0n1 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:40.927 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:53:41.201 00:53:41.201 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:41.201 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:53:41.201 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:41.201 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:53:41.201 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:53:41.201 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:41.201 10:04:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:53:41.201 10:04:47 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:53:42.139 { 00:53:42.139 "results": [ 00:53:42.139 { 00:53:42.139 "job": "NVMe0n1", 00:53:42.139 "core_mask": "0x1", 00:53:42.139 "workload": "write", 00:53:42.139 "status": "finished", 00:53:42.139 "queue_depth": 128, 00:53:42.139 "io_size": 4096, 00:53:42.139 "runtime": 1.006319, 00:53:42.139 "iops": 22538.57872106161, 00:53:42.139 "mibps": 88.04132312914692, 00:53:42.139 "io_failed": 0, 00:53:42.139 "io_timeout": 0, 00:53:42.139 "avg_latency_us": 5666.306209783731, 00:53:42.139 "min_latency_us": 3076.471615720524, 00:53:42.139 "max_latency_us": 13336.14672489083 00:53:42.139 } 00:53:42.139 ], 00:53:42.139 "core_count": 1 00:53:42.139 } 00:53:42.139 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:53:42.139 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:42.139 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:53:42.139 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:42.139 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n 10.0.0.2 ]] 00:53:42.139 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:53:42.139 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:42.139 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:53:42.398 nvme1n1 00:53:42.398 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:42.398 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:53:42.398 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # jq -r '.[].peer_address.traddr' 00:53:42.398 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:42.398 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:53:42.398 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:42.398 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # [[ 10.0.0.1 == \1\0\.\0\.\0\.\1 ]] 00:53:42.398 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller nvme1 00:53:42.398 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:42.398 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:53:42.398 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:42.398 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@109 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 
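
The block above is the heart of the multicontroller test. The first attach of NVMe0 to cnode1 succeeds; the four NOT-wrapped retries then have to fail with Code=-114: reusing the name with a different host NQN, pointing it at cnode2, or re-attaching with `-x disable` are all rejected ("already exists" / "multipath is disabled"), and only a second path to the same subsystem, on port 4421 with `-x failover`, is accepted. The test then detaches that second path and re-attaches it under a separate name, NVMe1, so bdev_nvme_get_controllers reports the expected count of 2 before perform_tests drives roughly 22.5k write IOPS at QD 128 / 4 KiB through the pair. The `-i` flag used in the nvme1 attaches pins the host's source address, and the nvmf_subsystem_get_qpairs checks around this point confirm it from the target's perspective; condensed into a standalone sketch (same RPCs and jq filter as the trace):

  # Attach through a chosen source IP, then ask the target which peer
  # address the resulting queue pairs actually use.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2
  peer=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 \
         | jq -r '.[].peer_address.traddr')
  [[ $peer == 10.0.0.2 ]] || echo "unexpected source address: $peer"
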
00:53:42.398 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:42.398 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:53:42.398 nvme1n1 00:53:42.398 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:42.398 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:53:42.398 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:42.398 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # jq -r '.[].peer_address.traddr' 00:53:42.398 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:53:42.398 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:42.398 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # [[ 10.0.0.2 == \1\0\.\0\.\0\.\2 ]] 00:53:42.398 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 86971 00:53:42.398 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 86971 ']' 00:53:42.398 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 86971 00:53:42.398 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:53:42.398 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:42.398 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86971 00:53:42.657 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:53:42.657 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:53:42.657 killing process with pid 86971 00:53:42.657 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86971' 00:53:42.657 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 86971 00:53:42.657 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 86971 00:53:42.918 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:53:42.918 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:42.918 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:53:42.918 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:42.918 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:53:42.918 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:42.918 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:53:42.918 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:42.918 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - 
SIGINT SIGTERM EXIT 00:53:42.918 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:53:42.918 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:53:42.918 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:53:42.918 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:53:42.918 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:53:42.918 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:53:42.918 [2024-12-09 10:04:46.729408] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:53:42.918 [2024-12-09 10:04:46.729498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86971 ] 00:53:42.918 [2024-12-09 10:04:46.877918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:42.918 [2024-12-09 10:04:46.959751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:53:42.918 [2024-12-09 10:04:47.963843] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 67d25c26-556c-400c-96ab-abe044ee03d3 already exists 00:53:42.918 [2024-12-09 10:04:47.963930] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:67d25c26-556c-400c-96ab-abe044ee03d3 alias for bdev NVMe1n1 00:53:42.918 [2024-12-09 10:04:47.963944] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:53:42.918 Running I/O for 1 seconds... 
00:53:42.918 22488.00 IOPS, 87.84 MiB/s 00:53:42.918 Latency(us) 00:53:42.918 [2024-12-09T10:04:49.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:53:42.918 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:53:42.918 NVMe0n1 : 1.01 22538.58 88.04 0.00 0.00 5666.31 3076.47 13336.15 00:53:42.918 [2024-12-09T10:04:49.962Z] =================================================================================================================== 00:53:42.918 [2024-12-09T10:04:49.962Z] Total : 22538.58 88.04 0.00 0.00 5666.31 3076.47 13336.15 00:53:42.918 Received shutdown signal, test time was about 1.000000 seconds 00:53:42.918 00:53:42.918 Latency(us) 00:53:42.918 [2024-12-09T10:04:49.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:53:42.918 [2024-12-09T10:04:49.962Z] =================================================================================================================== 00:53:42.918 [2024-12-09T10:04:49.962Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:53:42.918 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:53:42.918 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:53:42.918 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:53:42.918 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:53:42.918 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:53:42.918 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:53:42.918 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:53:42.918 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:53:42.918 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:53:42.918 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:53:42.918 rmmod nvme_tcp 00:53:42.918 rmmod nvme_fabrics 00:53:42.918 rmmod nvme_keyring 00:53:42.919 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:53:42.919 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:53:42.919 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:53:42.919 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 86919 ']' 00:53:42.919 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 86919 00:53:42.919 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 86919 ']' 00:53:42.919 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 86919 00:53:42.919 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:53:42.919 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:42.919 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86919 00:53:42.919 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:53:42.919 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:53:42.919 killing process with pid 86919 00:53:42.919 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86919' 00:53:42.919 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 86919 00:53:42.919 10:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 86919 00:53:43.179 10:04:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:53:43.179 10:04:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:53:43.179 10:04:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:53:43.179 10:04:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:53:43.179 10:04:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:53:43.179 10:04:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:53:43.179 10:04:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:53:43.179 10:04:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:53:43.179 10:04:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:53:43.179 10:04:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:53:43.179 10:04:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:53:43.179 10:04:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:53:43.179 10:04:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:53:43.438 10:04:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:53:43.439 10:04:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:53:43.439 10:04:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:53:43.439 10:04:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:53:43.439 10:04:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:53:43.439 10:04:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:53:43.439 10:04:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:53:43.439 10:04:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:53:43.439 10:04:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:53:43.439 10:04:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@246 -- # remove_spdk_ns 00:53:43.439 10:04:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:43.439 10:04:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:53:43.439 10:04:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
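
Teardown mirrors setup: the bdevperf and nvmf_tgt processes are killed (pids 86971 and 86919 above), `iptr` restores the firewall, and nvmf_veth_fini deletes the bridge, the veth pairs, and the namespace. The firewall restore deserves a note because it relies on how the rules were added in the first place: every rule inserted earlier carried `-m comment --comment 'SPDK_NVMF:...'`, so cleanup is a single filtered reload instead of bookkeeping rule numbers:

  # Drop every rule tagged with the SPDK_NVMF comment marker in one atomic
  # reload (exactly the pipeline in the iptr trace above).
  iptables-save | grep -v SPDK_NVMF | iptables-restore
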
00:53:43.439 10:04:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@300 -- # return 0 00:53:43.439 00:53:43.439 real 0m5.813s 00:53:43.439 user 0m16.932s 00:53:43.439 sys 0m1.537s 00:53:43.439 10:04:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:43.439 10:04:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:53:43.439 ************************************ 00:53:43.439 END TEST nvmf_multicontroller 00:53:43.439 ************************************ 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:53:43.699 ************************************ 00:53:43.699 START TEST nvmf_aer 00:53:43.699 ************************************ 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:53:43.699 * Looking for test storage... 00:53:43.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:53:43.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:43.699 --rc genhtml_branch_coverage=1 00:53:43.699 --rc genhtml_function_coverage=1 00:53:43.699 --rc genhtml_legend=1 00:53:43.699 --rc geninfo_all_blocks=1 00:53:43.699 --rc geninfo_unexecuted_blocks=1 00:53:43.699 00:53:43.699 ' 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:53:43.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:43.699 --rc genhtml_branch_coverage=1 00:53:43.699 --rc genhtml_function_coverage=1 00:53:43.699 --rc genhtml_legend=1 00:53:43.699 --rc geninfo_all_blocks=1 00:53:43.699 --rc geninfo_unexecuted_blocks=1 00:53:43.699 00:53:43.699 ' 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:53:43.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:43.699 --rc genhtml_branch_coverage=1 00:53:43.699 --rc genhtml_function_coverage=1 00:53:43.699 --rc genhtml_legend=1 00:53:43.699 --rc geninfo_all_blocks=1 00:53:43.699 --rc geninfo_unexecuted_blocks=1 00:53:43.699 00:53:43.699 ' 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:53:43.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:43.699 --rc genhtml_branch_coverage=1 00:53:43.699 --rc genhtml_function_coverage=1 00:53:43.699 --rc genhtml_legend=1 00:53:43.699 --rc geninfo_all_blocks=1 00:53:43.699 --rc geninfo_unexecuted_blocks=1 00:53:43.699 00:53:43.699 ' 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:53:43.699 
10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:53:43.699 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:53:43.960 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ no == yes ]] 
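A side note on the harmless "[: : integer expression expected" message above: it is bash complaining that an empty string reached the numeric test at nvmf/common.sh line 33 ('[' '' -eq 1 ']'), not a test failure; the script continues and the check simply evaluates false. A two-line reproduction plus the usual guard (the variable name x is illustrative only):

    x=''
    [ "$x" -eq 1 ]        # prints "[: : integer expression expected", exits non-zero
    [ "${x:-0}" -eq 1 ]   # guarded form: empty expands to 0, test is quiet and false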
00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@460 -- # nvmf_veth_init 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:53:43.960 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:53:43.961 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:53:43.961 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:53:43.961 Cannot find device "nvmf_init_br" 00:53:43.961 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # true 00:53:43.961 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:53:43.961 Cannot find device "nvmf_init_br2" 00:53:43.961 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # true 00:53:43.961 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:53:43.961 Cannot find device "nvmf_tgt_br" 00:53:43.961 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # true 00:53:43.961 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:53:43.961 Cannot find device "nvmf_tgt_br2" 00:53:43.961 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # true 00:53:43.961 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:53:43.961 Cannot find device "nvmf_init_br" 00:53:43.961 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # true 00:53:43.961 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:53:43.961 Cannot find device "nvmf_init_br2" 00:53:43.961 10:04:50 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # true 00:53:43.961 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:53:43.961 Cannot find device "nvmf_tgt_br" 00:53:43.961 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # true 00:53:43.961 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:53:43.961 Cannot find device "nvmf_tgt_br2" 00:53:43.961 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # true 00:53:43.961 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:53:43.961 Cannot find device "nvmf_br" 00:53:43.961 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # true 00:53:43.961 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:53:43.961 Cannot find device "nvmf_init_if" 00:53:43.961 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # true 00:53:43.961 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:53:43.961 Cannot find device "nvmf_init_if2" 00:53:43.961 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # true 00:53:43.961 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:53:43.961 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:53:43.961 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # true 00:53:43.961 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:53:43.961 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:53:43.961 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # true 00:53:43.961 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:53:43.961 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:53:43.961 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:53:43.961 10:04:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:53:44.228 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:53:44.228 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.142 ms 00:53:44.228 00:53:44.228 --- 10.0.0.3 ping statistics --- 00:53:44.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:44.228 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:53:44.228 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:53:44.228 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:53:44.228 00:53:44.228 --- 10.0.0.4 ping statistics --- 00:53:44.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:44.228 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:53:44.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:53:44.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:53:44.228 00:53:44.228 --- 10.0.0.1 ping statistics --- 00:53:44.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:44.228 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:53:44.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:53:44.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:53:44.228 00:53:44.228 --- 10.0.0.2 ping statistics --- 00:53:44.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:44.228 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@461 -- # return 0 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:53:44.228 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:53:44.229 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:53:44.229 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:53:44.229 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=87286 00:53:44.229 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:53:44.229 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 87286 00:53:44.229 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 87286 ']' 00:53:44.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:53:44.229 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:53:44.229 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:44.229 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:53:44.229 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:44.229 10:04:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:53:44.229 [2024-12-09 10:04:51.247270] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
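The nvmf_veth_init sequence traced above deserves a compact restatement: the harness builds two veth pairs for the initiator side (10.0.0.1 and 10.0.0.2 in the root namespace) and two for the target side (10.0.0.3 and 10.0.0.4 inside nvmf_tgt_ns_spdk), enslaves all four peer ends to the bridge nvmf_br, opens TCP/4420 with the tagged iptables ACCEPT rules, and ping-verifies every address before launching the target. A condensed single-pair sketch of the same recipe under those assumptions (interface and namespace names follow the log; the harness performs each step twice and in a slightly different order):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end enters the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br master nvmf_br && ip link set nvmf_tgt_br up
    ping -c 1 10.0.0.3    # root namespace must reach the target address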
00:53:44.229 [2024-12-09 10:04:51.247344] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:53:44.489 [2024-12-09 10:04:51.388170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:53:44.489 [2024-12-09 10:04:51.485769] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:53:44.489 [2024-12-09 10:04:51.485849] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:53:44.489 [2024-12-09 10:04:51.485857] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:53:44.489 [2024-12-09 10:04:51.485862] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:53:44.489 [2024-12-09 10:04:51.485866] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:53:44.489 [2024-12-09 10:04:51.487126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:44.489 [2024-12-09 10:04:51.487196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:53:44.489 [2024-12-09 10:04:51.487342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:53:44.489 [2024-12-09 10:04:51.487344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:53:45.427 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:45.427 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:53:45.427 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:53:45.427 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:53:45.427 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:53:45.427 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:53:45.427 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:53:45.427 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:45.427 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:53:45.427 [2024-12-09 10:04:52.210162] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:53:45.427 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:45.427 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:53:45.427 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:45.427 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:53:45.427 Malloc0 00:53:45.427 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:45.427 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:53:45.427 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:45.428 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:53:45.428 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:45.428 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:53:45.428 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:45.428 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:53:45.428 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:45.428 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:53:45.428 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:45.428 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:53:45.428 [2024-12-09 10:04:52.288397] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:53:45.428 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:45.428 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:53:45.428 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:45.428 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:53:45.428 [ 00:53:45.428 { 00:53:45.428 "allow_any_host": true, 00:53:45.428 "hosts": [], 00:53:45.428 "listen_addresses": [], 00:53:45.428 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:53:45.428 "subtype": "Discovery" 00:53:45.428 }, 00:53:45.428 { 00:53:45.428 "allow_any_host": true, 00:53:45.428 "hosts": [], 00:53:45.428 "listen_addresses": [ 00:53:45.428 { 00:53:45.428 "adrfam": "IPv4", 00:53:45.428 "traddr": "10.0.0.3", 00:53:45.428 "trsvcid": "4420", 00:53:45.428 "trtype": "TCP" 00:53:45.428 } 00:53:45.428 ], 00:53:45.428 "max_cntlid": 65519, 00:53:45.428 "max_namespaces": 2, 00:53:45.428 "min_cntlid": 1, 00:53:45.428 "model_number": "SPDK bdev Controller", 00:53:45.428 "namespaces": [ 00:53:45.428 { 00:53:45.428 "bdev_name": "Malloc0", 00:53:45.428 "name": "Malloc0", 00:53:45.428 "nguid": "8B577E98E5DC478393173486349166FA", 00:53:45.428 "nsid": 1, 00:53:45.428 "uuid": "8b577e98-e5dc-4783-9317-3486349166fa" 00:53:45.428 } 00:53:45.428 ], 00:53:45.428 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:53:45.428 "serial_number": "SPDK00000000000001", 00:53:45.428 "subtype": "NVMe" 00:53:45.428 } 00:53:45.428 ] 00:53:45.428 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:45.428 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:53:45.428 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:53:45.428 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=87340 00:53:45.428 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:53:45.428 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:53:45.428 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:53:45.428 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:53:45.428 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:53:45.428 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:53:45.428 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:53:45.428 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:53:45.428 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:53:45.428 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:53:45.428 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:53:45.688 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:53:45.688 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:53:45.688 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:53:45.688 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:53:45.688 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:45.688 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:53:45.688 Malloc1 00:53:45.688 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:45.688 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:53:45.688 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:45.688 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:53:45.688 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:45.688 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:53:45.688 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:45.688 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:53:45.688 Asynchronous Event Request test 00:53:45.688 Attaching to 10.0.0.3 00:53:45.688 Attached to 10.0.0.3 00:53:45.688 Registering asynchronous event callbacks... 00:53:45.688 Starting namespace attribute notice tests for all controllers... 00:53:45.688 10.0.0.3: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:53:45.688 aer_cb - Changed Namespace 00:53:45.688 Cleaning up... 
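Summarizing the aer.sh flow that produced the "Changed Namespace" callback above: the target is populated over JSON-RPC, the aer test app attaches to 10.0.0.3 and touches /tmp/aer_touch_file once its callbacks are armed (the waitforfile polling loop above gates on that), and hot-adding a second namespace is what fires the namespace-attribute-changed AEN. The same sequence sketched as direct scripts/rpc.py calls, with every flag copied from the trace (rpc_cmd in the trace issues these over the app's RPC socket, so this is an equivalent rendering rather than the harness's literal code):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 --name Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # the aer app connects here, registers callbacks, then touches /tmp/aer_touch_file;
    # adding a second namespace triggers the namespace-attribute-changed AEN:
    $rpc bdev_malloc_create 64 4096 --name Malloc1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2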
00:53:45.688 [ 00:53:45.688 { 00:53:45.688 "allow_any_host": true, 00:53:45.688 "hosts": [], 00:53:45.688 "listen_addresses": [], 00:53:45.688 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:53:45.688 "subtype": "Discovery" 00:53:45.688 }, 00:53:45.688 { 00:53:45.688 "allow_any_host": true, 00:53:45.688 "hosts": [], 00:53:45.688 "listen_addresses": [ 00:53:45.688 { 00:53:45.688 "adrfam": "IPv4", 00:53:45.688 "traddr": "10.0.0.3", 00:53:45.688 "trsvcid": "4420", 00:53:45.688 "trtype": "TCP" 00:53:45.688 } 00:53:45.688 ], 00:53:45.688 "max_cntlid": 65519, 00:53:45.688 "max_namespaces": 2, 00:53:45.688 "min_cntlid": 1, 00:53:45.688 "model_number": "SPDK bdev Controller", 00:53:45.688 "namespaces": [ 00:53:45.688 { 00:53:45.688 "bdev_name": "Malloc0", 00:53:45.688 "name": "Malloc0", 00:53:45.688 "nguid": "8B577E98E5DC478393173486349166FA", 00:53:45.688 "nsid": 1, 00:53:45.688 "uuid": "8b577e98-e5dc-4783-9317-3486349166fa" 00:53:45.688 }, 00:53:45.688 { 00:53:45.688 "bdev_name": "Malloc1", 00:53:45.688 "name": "Malloc1", 00:53:45.688 "nguid": "CBAA47A9D662415BAAA1A831154FA4F8", 00:53:45.688 "nsid": 2, 00:53:45.688 "uuid": "cbaa47a9-d662-415b-aaa1-a831154fa4f8" 00:53:45.688 } 00:53:45.688 ], 00:53:45.688 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:53:45.688 "serial_number": "SPDK00000000000001", 00:53:45.688 "subtype": "NVMe" 00:53:45.688 } 00:53:45.688 ] 00:53:45.688 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:45.688 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 87340 00:53:45.688 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:53:45.688 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:45.688 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:53:45.688 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:45.688 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:53:45.688 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:45.688 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:53:45.948 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:45.948 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:53:45.948 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:45.948 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:53:45.948 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:45.948 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:53:45.948 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:53:45.948 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:53:45.948 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:53:45.948 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:53:45.948 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:53:45.948 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:53:45.948 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:53:45.949 rmmod nvme_tcp 
00:53:45.949 rmmod nvme_fabrics 00:53:45.949 rmmod nvme_keyring 00:53:45.949 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:53:45.949 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:53:45.949 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:53:45.949 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 87286 ']' 00:53:45.949 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 87286 00:53:45.949 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 87286 ']' 00:53:45.949 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 87286 00:53:45.949 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:53:45.949 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:45.949 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87286 00:53:45.949 killing process with pid 87286 00:53:45.949 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:53:45.949 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:53:45.949 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87286' 00:53:45.949 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 87286 00:53:45.949 10:04:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 87286 00:53:46.208 10:04:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:53:46.208 10:04:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:53:46.208 10:04:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:53:46.209 10:04:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:53:46.209 10:04:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:53:46.209 10:04:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:53:46.209 10:04:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:53:46.209 10:04:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:53:46.209 10:04:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:53:46.209 10:04:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:53:46.209 10:04:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:53:46.469 10:04:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:53:46.469 10:04:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:53:46.469 10:04:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:53:46.469 10:04:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:53:46.469 10:04:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:53:46.469 10:04:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:53:46.469 10:04:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:53:46.469 10:04:53 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:53:46.469 10:04:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:53:46.469 10:04:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:53:46.469 10:04:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:53:46.469 10:04:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@246 -- # remove_spdk_ns 00:53:46.469 10:04:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:46.469 10:04:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:53:46.469 10:04:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:46.469 10:04:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@300 -- # return 0 00:53:46.469 00:53:46.469 real 0m2.994s 00:53:46.469 user 0m7.162s 00:53:46.469 sys 0m0.953s 00:53:46.469 10:04:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:46.469 10:04:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:53:46.469 ************************************ 00:53:46.469 END TEST nvmf_aer 00:53:46.469 ************************************ 00:53:46.729 10:04:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:53:46.729 10:04:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:53:46.729 10:04:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:46.729 10:04:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:53:46.729 ************************************ 00:53:46.729 START TEST nvmf_async_init 00:53:46.729 ************************************ 00:53:46.729 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:53:46.729 * Looking for test storage... 
00:53:46.729 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:53:46.729 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:53:46.729 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:53:46.729 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:53:46.729 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:53:46.729 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:53:46.729 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:53:46.729 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:53:46.729 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:53:46.729 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:53:46.729 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:53:46.729 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:53:46.729 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:53:46.729 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:53:46.729 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:53:46.729 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:53:46.729 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:53:46.729 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:53:46.729 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:53:46.729 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:53:46.729 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:53:46.729 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:53:46.729 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:53:46.729 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:53:46.729 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:53:46.729 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:53:46.989 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:53:46.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:46.990 --rc genhtml_branch_coverage=1 00:53:46.990 --rc genhtml_function_coverage=1 00:53:46.990 --rc genhtml_legend=1 00:53:46.990 --rc geninfo_all_blocks=1 00:53:46.990 --rc geninfo_unexecuted_blocks=1 00:53:46.990 00:53:46.990 ' 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:53:46.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:46.990 --rc genhtml_branch_coverage=1 00:53:46.990 --rc genhtml_function_coverage=1 00:53:46.990 --rc genhtml_legend=1 00:53:46.990 --rc geninfo_all_blocks=1 00:53:46.990 --rc geninfo_unexecuted_blocks=1 00:53:46.990 00:53:46.990 ' 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:53:46.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:46.990 --rc genhtml_branch_coverage=1 00:53:46.990 --rc genhtml_function_coverage=1 00:53:46.990 --rc genhtml_legend=1 00:53:46.990 --rc geninfo_all_blocks=1 00:53:46.990 --rc geninfo_unexecuted_blocks=1 00:53:46.990 00:53:46.990 ' 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:53:46.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:46.990 --rc genhtml_branch_coverage=1 00:53:46.990 --rc genhtml_function_coverage=1 00:53:46.990 --rc genhtml_legend=1 00:53:46.990 --rc geninfo_all_blocks=1 00:53:46.990 --rc geninfo_unexecuted_blocks=1 00:53:46.990 00:53:46.990 ' 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:53:46.990 10:04:53 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:53:46.990 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:53:46.990 10:04:53 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=0bdd3456118f40cfb9cc8b48191c7a06 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@460 -- # nvmf_veth_init 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:53:46.990 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:53:46.991 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:53:46.991 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:53:46.991 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:53:46.991 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:53:46.991 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:53:46.991 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 
00:53:46.991 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:53:46.991 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:53:46.991 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:53:46.991 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:53:46.991 Cannot find device "nvmf_init_br" 00:53:46.991 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:53:46.991 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:53:46.991 Cannot find device "nvmf_init_br2" 00:53:46.991 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:53:46.991 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:53:46.991 Cannot find device "nvmf_tgt_br" 00:53:46.991 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # true 00:53:46.991 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:53:46.991 Cannot find device "nvmf_tgt_br2" 00:53:46.991 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # true 00:53:46.991 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:53:46.991 Cannot find device "nvmf_init_br" 00:53:46.991 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # true 00:53:46.991 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:53:46.991 Cannot find device "nvmf_init_br2" 00:53:46.991 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # true 00:53:46.991 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:53:46.991 Cannot find device "nvmf_tgt_br" 00:53:46.991 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # true 00:53:46.991 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:53:46.991 Cannot find device "nvmf_tgt_br2" 00:53:46.991 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # true 00:53:46.991 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:53:46.991 Cannot find device "nvmf_br" 00:53:46.991 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # true 00:53:46.991 10:04:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:53:46.991 Cannot find device "nvmf_init_if" 00:53:46.991 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # true 00:53:46.991 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:53:46.991 Cannot find device "nvmf_init_if2" 00:53:46.991 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # true 00:53:46.991 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:53:47.251 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # true 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if2 00:53:47.251 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # true 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:53:47.251 10:04:54 
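
The nvmf_veth_init steps above assemble a self-contained test network: one namespace for the target, veth pairs whose peer ends are enslaved to a bridge, and a shared 10.0.0.0/24 with initiator addresses (.1/.2) on the host side and target addresses (.3/.4) inside the namespace. The earlier "Cannot find device" and "Cannot open network namespace" lines are expected noise: cleanup of any previous run's topology goes first and each failing command is followed by true. A condensed sketch of the topology, reconstructed from the commands in the log; the second pair (nvmf_init_if2/nvmf_tgt_if2) repeats the same pattern:

    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: *_if is the addressed end, *_br the end that joins the bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
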
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:53:47.251 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:53:47.251 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:53:47.251 00:53:47.251 --- 10.0.0.3 ping statistics --- 00:53:47.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:47.251 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:53:47.251 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:53:47.251 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.030 ms 00:53:47.251 00:53:47.251 --- 10.0.0.4 ping statistics --- 00:53:47.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:47.251 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:53:47.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:53:47.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:53:47.251 00:53:47.251 --- 10.0.0.1 ping statistics --- 00:53:47.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:47.251 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:53:47.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:53:47.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:53:47.251 00:53:47.251 --- 10.0.0.2 ping statistics --- 00:53:47.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:47.251 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@461 -- # return 0 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:53:47.251 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:53:47.252 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:53:47.252 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:53:47.252 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:53:47.511 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:53:47.511 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:53:47.511 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:53:47.511 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:53:47.511 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=87566 00:53:47.511 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:53:47.511 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 87566 00:53:47.511 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 87566 ']' 00:53:47.511 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:53:47.511 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:47.511 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:53:47.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:53:47.511 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:47.511 10:04:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:53:47.511 [2024-12-09 10:04:54.359823] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:53:47.511 [2024-12-09 10:04:54.359892] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:53:47.511 [2024-12-09 10:04:54.507845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:47.771 [2024-12-09 10:04:54.587435] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
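
Two details in the block above are easy to miss. The iptables rules go in through the ipts wrapper, which tags every rule with an "SPDK_NVMF:" comment; the matching iptr helper (it runs during teardown at the end of this test) restores the firewall by filtering the tagged rules out of a full dump, so nothing else on the box is disturbed. And the four pings are the gate: only after bridge connectivity is proven in both directions is the target launched, with "ip netns exec nvmf_tgt_ns_spdk" prepended to NVMF_APP. A sketch of the wrapper pair, consistent with the expansions the log shows (the function bodies themselves are not expanded in the trace):

    ipts() {  # insert a rule, tagged so teardown can identify it
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }

    iptr() {  # drop every tagged rule in one pass
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }

    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
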
00:53:47.771 [2024-12-09 10:04:54.587501] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:53:47.771 [2024-12-09 10:04:54.587509] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:53:47.771 [2024-12-09 10:04:54.587516] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:53:47.771 [2024-12-09 10:04:54.587521] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:53:47.771 [2024-12-09 10:04:54.587888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:53:48.339 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:48.339 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:53:48.339 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:53:48.339 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:53:48.339 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:53:48.339 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:53:48.339 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:53:48.339 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:48.339 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:53:48.339 [2024-12-09 10:04:55.319575] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:53:48.339 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:48.339 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:53:48.339 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:48.339 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:53:48.339 null0 00:53:48.339 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:48.339 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:53:48.339 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:48.339 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:53:48.339 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:48.339 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:53:48.339 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:48.339 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:53:48.339 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:48.339 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 0bdd3456118f40cfb9cc8b48191c7a06 00:53:48.339 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 
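
The xtrace above, together with the lines that follow, condenses to a short provisioning script. rpc_cmd talks to the target over /var/tmp/spdk.sock; the sketch below assumes a plain scripts/rpc.py invocation (the harness may route through a persistent RPC daemon instead). The NGUID is the one produced earlier by uuidgen | tr -d -:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o
    $rpc bdev_null_create null0 1024 512          # 1024 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 \
        -g 0bdd3456118f40cfb9cc8b48191c7a06
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.3 -s 4420
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 \
        -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0

The bdev dump that follows checks out against this: num_blocks 2097152 at block_size 512 is exactly the 1024 MiB null bdev (2097152 * 512 B = 1 GiB).
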
-- # xtrace_disable 00:53:48.339 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:53:48.339 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:48.339 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:53:48.339 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:48.339 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:53:48.339 [2024-12-09 10:04:55.379635] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:53:48.608 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:48.608 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:53:48.608 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:48.608 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:53:48.608 nvme0n1 00:53:48.608 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:48.608 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:53:48.608 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:48.608 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:53:48.608 [ 00:53:48.608 { 00:53:48.608 "aliases": [ 00:53:48.608 "0bdd3456-118f-40cf-b9cc-8b48191c7a06" 00:53:48.608 ], 00:53:48.608 "assigned_rate_limits": { 00:53:48.608 "r_mbytes_per_sec": 0, 00:53:48.608 "rw_ios_per_sec": 0, 00:53:48.608 "rw_mbytes_per_sec": 0, 00:53:48.608 "w_mbytes_per_sec": 0 00:53:48.608 }, 00:53:48.608 "block_size": 512, 00:53:48.608 "claimed": false, 00:53:48.608 "driver_specific": { 00:53:48.608 "mp_policy": "active_passive", 00:53:48.608 "nvme": [ 00:53:48.608 { 00:53:48.608 "ctrlr_data": { 00:53:48.608 "ana_reporting": false, 00:53:48.608 "cntlid": 1, 00:53:48.608 "firmware_revision": "25.01", 00:53:48.608 "model_number": "SPDK bdev Controller", 00:53:48.608 "multi_ctrlr": true, 00:53:48.608 "oacs": { 00:53:48.608 "firmware": 0, 00:53:48.608 "format": 0, 00:53:48.608 "ns_manage": 0, 00:53:48.608 "security": 0 00:53:48.608 }, 00:53:48.608 "serial_number": "00000000000000000000", 00:53:48.608 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:53:48.608 "vendor_id": "0x8086" 00:53:48.608 }, 00:53:48.608 "ns_data": { 00:53:48.608 "can_share": true, 00:53:48.608 "id": 1 00:53:48.608 }, 00:53:48.608 "trid": { 00:53:48.608 "adrfam": "IPv4", 00:53:48.608 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:53:48.608 "traddr": "10.0.0.3", 00:53:48.608 "trsvcid": "4420", 00:53:48.608 "trtype": "TCP" 00:53:48.608 }, 00:53:48.608 "vs": { 00:53:48.608 "nvme_version": "1.3" 00:53:48.608 } 00:53:48.608 } 00:53:48.608 ] 00:53:48.608 }, 00:53:48.608 "memory_domains": [ 00:53:48.608 { 00:53:48.608 "dma_device_id": "system", 00:53:48.608 "dma_device_type": 1 00:53:48.608 } 00:53:48.608 ], 00:53:48.608 "name": "nvme0n1", 00:53:48.608 "num_blocks": 2097152, 00:53:48.608 "numa_id": -1, 00:53:48.608 "product_name": "NVMe disk", 00:53:48.608 "supported_io_types": { 00:53:48.608 "abort": true, 
00:53:48.608 "compare": true, 00:53:48.608 "compare_and_write": true, 00:53:48.868 "copy": true, 00:53:48.868 "flush": true, 00:53:48.868 "get_zone_info": false, 00:53:48.868 "nvme_admin": true, 00:53:48.868 "nvme_io": true, 00:53:48.868 "nvme_io_md": false, 00:53:48.868 "nvme_iov_md": false, 00:53:48.868 "read": true, 00:53:48.868 "reset": true, 00:53:48.868 "seek_data": false, 00:53:48.868 "seek_hole": false, 00:53:48.868 "unmap": false, 00:53:48.868 "write": true, 00:53:48.868 "write_zeroes": true, 00:53:48.868 "zcopy": false, 00:53:48.868 "zone_append": false, 00:53:48.868 "zone_management": false 00:53:48.868 }, 00:53:48.868 "uuid": "0bdd3456-118f-40cf-b9cc-8b48191c7a06", 00:53:48.868 "zoned": false 00:53:48.868 } 00:53:48.868 ] 00:53:48.868 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:48.868 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:53:48.868 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:48.868 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:53:48.868 [2024-12-09 10:04:55.653349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:53:48.868 [2024-12-09 10:04:55.653467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148f360 (9): Bad file descriptor 00:53:48.868 [2024-12-09 10:04:55.785714] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:53:48.868 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:48.868 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:53:48.868 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:48.868 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:53:48.868 [ 00:53:48.868 { 00:53:48.868 "aliases": [ 00:53:48.868 "0bdd3456-118f-40cf-b9cc-8b48191c7a06" 00:53:48.868 ], 00:53:48.868 "assigned_rate_limits": { 00:53:48.868 "r_mbytes_per_sec": 0, 00:53:48.868 "rw_ios_per_sec": 0, 00:53:48.868 "rw_mbytes_per_sec": 0, 00:53:48.868 "w_mbytes_per_sec": 0 00:53:48.868 }, 00:53:48.868 "block_size": 512, 00:53:48.868 "claimed": false, 00:53:48.868 "driver_specific": { 00:53:48.868 "mp_policy": "active_passive", 00:53:48.868 "nvme": [ 00:53:48.868 { 00:53:48.868 "ctrlr_data": { 00:53:48.868 "ana_reporting": false, 00:53:48.868 "cntlid": 2, 00:53:48.868 "firmware_revision": "25.01", 00:53:48.868 "model_number": "SPDK bdev Controller", 00:53:48.868 "multi_ctrlr": true, 00:53:48.868 "oacs": { 00:53:48.868 "firmware": 0, 00:53:48.868 "format": 0, 00:53:48.868 "ns_manage": 0, 00:53:48.868 "security": 0 00:53:48.868 }, 00:53:48.868 "serial_number": "00000000000000000000", 00:53:48.868 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:53:48.868 "vendor_id": "0x8086" 00:53:48.868 }, 00:53:48.868 "ns_data": { 00:53:48.868 "can_share": true, 00:53:48.868 "id": 1 00:53:48.868 }, 00:53:48.868 "trid": { 00:53:48.868 "adrfam": "IPv4", 00:53:48.868 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:53:48.868 "traddr": "10.0.0.3", 00:53:48.868 "trsvcid": "4420", 00:53:48.868 "trtype": "TCP" 00:53:48.868 }, 00:53:48.868 "vs": { 00:53:48.868 "nvme_version": "1.3" 00:53:48.868 } 00:53:48.868 } 00:53:48.868 ] 
00:53:48.868 }, 00:53:48.868 "memory_domains": [ 00:53:48.868 { 00:53:48.868 "dma_device_id": "system", 00:53:48.868 "dma_device_type": 1 00:53:48.868 } 00:53:48.868 ], 00:53:48.868 "name": "nvme0n1", 00:53:48.868 "num_blocks": 2097152, 00:53:48.868 "numa_id": -1, 00:53:48.868 "product_name": "NVMe disk", 00:53:48.868 "supported_io_types": { 00:53:48.868 "abort": true, 00:53:48.868 "compare": true, 00:53:48.868 "compare_and_write": true, 00:53:48.868 "copy": true, 00:53:48.868 "flush": true, 00:53:48.868 "get_zone_info": false, 00:53:48.868 "nvme_admin": true, 00:53:48.868 "nvme_io": true, 00:53:48.868 "nvme_io_md": false, 00:53:48.868 "nvme_iov_md": false, 00:53:48.868 "read": true, 00:53:48.868 "reset": true, 00:53:48.868 "seek_data": false, 00:53:48.868 "seek_hole": false, 00:53:48.868 "unmap": false, 00:53:48.868 "write": true, 00:53:48.868 "write_zeroes": true, 00:53:48.868 "zcopy": false, 00:53:48.868 "zone_append": false, 00:53:48.868 "zone_management": false 00:53:48.868 }, 00:53:48.868 "uuid": "0bdd3456-118f-40cf-b9cc-8b48191c7a06", 00:53:48.868 "zoned": false 00:53:48.868 } 00:53:48.868 ] 00:53:48.868 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:48.868 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:53:48.868 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:48.868 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:53:48.868 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:48.868 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:53:48.868 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.uj5m0Lv7oz 00:53:48.868 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:53:48.868 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.uj5m0Lv7oz 00:53:48.868 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.uj5m0Lv7oz 00:53:48.868 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:48.869 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:53:48.869 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:48.869 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:53:48.869 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:48.869 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:53:48.869 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:48.869 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 --secure-channel 00:53:48.869 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:48.869 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:53:48.869 [2024-12-09 10:04:55.889037] 
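
The reset above is the core of the async-init check: bdev_nvme_reset_controller drops the connection (the "Bad file descriptor" flush error is the expected side effect of the deliberate disconnect), and the re-read bdev shows cntlid advanced from 1 to 2, i.e. a fresh controller was negotiated. The test then detaches and repeats the attach over TLS: a PSK interchange key is written to a temp file, registered in the keyring, the subsystem flipped to explicit host allow-listing, and a second listener opened on 4421 with --secure-channel. Condensed from the calls in the log (the PSK is the throwaway test key shown above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    key_path=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"

    $rpc keyring_file_add_key key0 "$key_path"
    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.3 -s 4421 --secure-channel
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host1 --psk key0
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0

The third bdev dump that follows shows cntlid 3 and trsvcid 4421, confirming the secure path carried the attach.
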
tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:53:48.869 [2024-12-09 10:04:55.889326] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:53:48.869 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:48.869 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:53:48.869 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:48.869 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:53:48.869 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:48.869 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:53:48.869 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:48.869 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:53:49.129 [2024-12-09 10:04:55.912993] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:53:49.129 nvme0n1 00:53:49.129 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:49.129 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:53:49.129 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:49.129 10:04:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:53:49.129 [ 00:53:49.129 { 00:53:49.129 "aliases": [ 00:53:49.129 "0bdd3456-118f-40cf-b9cc-8b48191c7a06" 00:53:49.129 ], 00:53:49.129 "assigned_rate_limits": { 00:53:49.129 "r_mbytes_per_sec": 0, 00:53:49.129 "rw_ios_per_sec": 0, 00:53:49.129 "rw_mbytes_per_sec": 0, 00:53:49.129 "w_mbytes_per_sec": 0 00:53:49.129 }, 00:53:49.129 "block_size": 512, 00:53:49.129 "claimed": false, 00:53:49.129 "driver_specific": { 00:53:49.129 "mp_policy": "active_passive", 00:53:49.129 "nvme": [ 00:53:49.129 { 00:53:49.129 "ctrlr_data": { 00:53:49.129 "ana_reporting": false, 00:53:49.129 "cntlid": 3, 00:53:49.129 "firmware_revision": "25.01", 00:53:49.129 "model_number": "SPDK bdev Controller", 00:53:49.129 "multi_ctrlr": true, 00:53:49.129 "oacs": { 00:53:49.129 "firmware": 0, 00:53:49.129 "format": 0, 00:53:49.129 "ns_manage": 0, 00:53:49.129 "security": 0 00:53:49.129 }, 00:53:49.129 "serial_number": "00000000000000000000", 00:53:49.129 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:53:49.129 "vendor_id": "0x8086" 00:53:49.129 }, 00:53:49.129 "ns_data": { 00:53:49.129 "can_share": true, 00:53:49.129 "id": 1 00:53:49.129 }, 00:53:49.129 "trid": { 00:53:49.129 "adrfam": "IPv4", 00:53:49.129 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:53:49.129 "traddr": "10.0.0.3", 00:53:49.129 "trsvcid": "4421", 00:53:49.129 "trtype": "TCP" 00:53:49.129 }, 00:53:49.129 "vs": { 00:53:49.129 "nvme_version": "1.3" 00:53:49.129 } 00:53:49.129 } 00:53:49.129 ] 00:53:49.129 }, 00:53:49.129 "memory_domains": [ 00:53:49.129 { 00:53:49.129 "dma_device_id": "system", 00:53:49.129 "dma_device_type": 1 00:53:49.129 } 00:53:49.129 ], 00:53:49.129 "name": "nvme0n1", 00:53:49.129 "num_blocks": 
2097152, 00:53:49.129 "numa_id": -1, 00:53:49.129 "product_name": "NVMe disk", 00:53:49.129 "supported_io_types": { 00:53:49.129 "abort": true, 00:53:49.129 "compare": true, 00:53:49.129 "compare_and_write": true, 00:53:49.129 "copy": true, 00:53:49.129 "flush": true, 00:53:49.129 "get_zone_info": false, 00:53:49.129 "nvme_admin": true, 00:53:49.129 "nvme_io": true, 00:53:49.129 "nvme_io_md": false, 00:53:49.129 "nvme_iov_md": false, 00:53:49.129 "read": true, 00:53:49.129 "reset": true, 00:53:49.129 "seek_data": false, 00:53:49.129 "seek_hole": false, 00:53:49.129 "unmap": false, 00:53:49.129 "write": true, 00:53:49.129 "write_zeroes": true, 00:53:49.129 "zcopy": false, 00:53:49.129 "zone_append": false, 00:53:49.129 "zone_management": false 00:53:49.129 }, 00:53:49.129 "uuid": "0bdd3456-118f-40cf-b9cc-8b48191c7a06", 00:53:49.129 "zoned": false 00:53:49.129 } 00:53:49.129 ] 00:53:49.129 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:49.129 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:53:49.129 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:49.129 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:53:49.129 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:49.129 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.uj5m0Lv7oz 00:53:49.129 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:53:49.129 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:53:49.129 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:53:49.129 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:53:49.129 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:53:49.129 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:53:49.129 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:53:49.129 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:53:49.129 rmmod nvme_tcp 00:53:49.129 rmmod nvme_fabrics 00:53:49.129 rmmod nvme_keyring 00:53:49.129 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:53:49.129 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:53:49.129 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:53:49.129 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 87566 ']' 00:53:49.129 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 87566 00:53:49.129 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 87566 ']' 00:53:49.129 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 87566 00:53:49.389 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:53:49.389 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:49.389 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87566 00:53:49.389 killing process with pid 
87566 00:53:49.389 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:53:49.389 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:53:49.390 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87566' 00:53:49.390 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 87566 00:53:49.390 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 87566 00:53:49.649 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:53:49.649 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:53:49.649 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:53:49.649 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:53:49.649 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:53:49.649 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:53:49.649 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:53:49.649 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:53:49.649 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:53:49.649 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:53:49.649 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:53:49.649 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:53:49.649 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:53:49.649 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:53:49.649 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:53:49.649 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:53:49.649 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:53:49.649 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:53:49.649 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:53:49.649 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:53:49.649 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:53:49.909 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:53:49.909 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@246 -- # remove_spdk_ns 00:53:49.909 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:49.909 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:53:49.909 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
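
Teardown mirrors setup in reverse: controller detached, PSK file removed, the app killed by pid, then nvmf_tcp_fini unwinds the network (iptr restores iptables, links go nomaster and down, bridge and veth devices are deleted, and the namespace is removed via _remove_spdk_ns, whose body the log does not expand). The three rmmod lines earlier are modprobe -v output: removing nvme-tcp also drops its now-unused dependencies nvme_fabrics and nvme_keyring. The unload runs with errexit relaxed inside a retry loop, since the module can stay busy while connections drain; a sketch, assuming the loop breaks on success and paces itself (neither is visible in the trace, which succeeds on the first pass):

    set +e                           # module may still be referenced briefly
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 1                      # assumption: pacing between attempts
    done
    modprobe -v -r nvme-fabrics      # no-op if already removed above
    set -e
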
00:53:49.909 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@300 -- # return 0 00:53:49.909 00:53:49.909 real 0m3.212s 00:53:49.909 user 0m2.554s 00:53:49.909 sys 0m0.953s 00:53:49.909 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:49.909 10:04:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:53:49.909 ************************************ 00:53:49.909 END TEST nvmf_async_init 00:53:49.909 ************************************ 00:53:49.909 10:04:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:53:49.909 10:04:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:53:49.909 10:04:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:49.909 10:04:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:53:49.909 ************************************ 00:53:49.909 START TEST dma 00:53:49.909 ************************************ 00:53:49.909 10:04:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:53:50.170 * Looking for test storage... 00:53:50.170 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:53:50.170 10:04:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:53:50.170 10:04:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:53:50.170 10:04:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:53:50.170 10:04:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:53:50.170 10:04:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:53:50.170 10:04:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:53:50.170 10:04:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:53:50.170 10:04:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:53:50.170 10:04:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:53:50.170 10:04:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:53:50.170 10:04:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:53:50.170 10:04:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:53:50.170 10:04:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:53:50.170 10:04:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:53:50.170 10:04:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:53:50.170 10:04:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:53:50.170 10:04:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:53:50.170 10:04:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:53:50.170 10:04:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:53:50.170 10:04:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:53:50.170 10:04:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:53:50.170 10:04:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:53:50.170 10:04:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:53:50.170 10:04:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:53:50.170 10:04:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:53:50.170 10:04:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:53:50.170 10:04:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:53:50.170 10:04:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:53:50.170 10:04:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:53:50.170 10:04:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:53:50.170 10:04:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:53:50.170 10:04:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:53:50.170 10:04:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:53:50.170 10:04:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:53:50.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:50.170 --rc genhtml_branch_coverage=1 00:53:50.170 --rc genhtml_function_coverage=1 00:53:50.170 --rc genhtml_legend=1 00:53:50.170 --rc geninfo_all_blocks=1 00:53:50.170 --rc geninfo_unexecuted_blocks=1 00:53:50.170 00:53:50.170 ' 00:53:50.170 10:04:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:53:50.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:50.170 --rc genhtml_branch_coverage=1 00:53:50.170 --rc genhtml_function_coverage=1 00:53:50.170 --rc genhtml_legend=1 00:53:50.170 --rc geninfo_all_blocks=1 00:53:50.170 --rc geninfo_unexecuted_blocks=1 00:53:50.170 00:53:50.170 ' 00:53:50.170 10:04:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:53:50.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:50.170 --rc genhtml_branch_coverage=1 00:53:50.170 --rc genhtml_function_coverage=1 00:53:50.170 --rc genhtml_legend=1 00:53:50.170 --rc geninfo_all_blocks=1 00:53:50.170 --rc geninfo_unexecuted_blocks=1 00:53:50.170 00:53:50.170 ' 00:53:50.170 10:04:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:53:50.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:50.170 --rc genhtml_branch_coverage=1 00:53:50.170 --rc genhtml_function_coverage=1 00:53:50.170 --rc genhtml_legend=1 00:53:50.170 --rc geninfo_all_blocks=1 00:53:50.171 --rc geninfo_unexecuted_blocks=1 00:53:50.171 00:53:50.171 ' 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:53:50.171 10:04:57 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:53:50.171 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:53:50.171 00:53:50.171 real 0m0.286s 00:53:50.171 user 0m0.158s 00:53:50.171 sys 0m0.146s 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:53:50.171 ************************************ 00:53:50.171 END TEST dma 00:53:50.171 ************************************ 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:50.171 10:04:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:53:50.171 ************************************ 00:53:50.171 START TEST nvmf_identify 00:53:50.171 ************************************ 00:53:50.171 10:04:57 
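
One note before the identify output continues: TEST dma above passed in 0.286 s because host/dma.sh gates on the transport and exits immediately for anything but RDMA, so the whole test is a deliberate no-op on this tcp run. The gate, as a sketch matching dma.sh@12-13 in the trace (the comparison is already expanded to literals there, so the variable name is an assumption):

    # host/dma.sh gate: DMA offload is only exercised over RDMA
    if [ "$TEST_TRANSPORT" != "rdma" ]; then
        exit 0
    fi
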
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:53:50.431 * Looking for test storage... 00:53:50.431 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:53:50.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:50.431 --rc genhtml_branch_coverage=1 00:53:50.431 --rc genhtml_function_coverage=1 00:53:50.431 --rc genhtml_legend=1 00:53:50.431 --rc geninfo_all_blocks=1 00:53:50.431 --rc geninfo_unexecuted_blocks=1 00:53:50.431 00:53:50.431 ' 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:53:50.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:50.431 --rc genhtml_branch_coverage=1 00:53:50.431 --rc genhtml_function_coverage=1 00:53:50.431 --rc genhtml_legend=1 00:53:50.431 --rc geninfo_all_blocks=1 00:53:50.431 --rc geninfo_unexecuted_blocks=1 00:53:50.431 00:53:50.431 ' 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:53:50.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:50.431 --rc genhtml_branch_coverage=1 00:53:50.431 --rc genhtml_function_coverage=1 00:53:50.431 --rc genhtml_legend=1 00:53:50.431 --rc geninfo_all_blocks=1 00:53:50.431 --rc geninfo_unexecuted_blocks=1 00:53:50.431 00:53:50.431 ' 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:53:50.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:50.431 --rc genhtml_branch_coverage=1 00:53:50.431 --rc genhtml_function_coverage=1 00:53:50.431 --rc genhtml_legend=1 00:53:50.431 --rc geninfo_all_blocks=1 00:53:50.431 --rc geninfo_unexecuted_blocks=1 00:53:50.431 00:53:50.431 ' 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:53:50.431 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:50.432 
10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:53:50.432 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:50.432 10:04:57 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:53:50.432 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:53:50.692 Cannot find device "nvmf_init_br" 00:53:50.692 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:53:50.692 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:53:50.692 Cannot find device "nvmf_init_br2" 00:53:50.692 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:53:50.692 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:53:50.692 Cannot find device "nvmf_tgt_br" 00:53:50.692 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:53:50.692 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:53:50.692 Cannot find device "nvmf_tgt_br2" 00:53:50.692 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:53:50.692 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:53:50.692 Cannot find device "nvmf_init_br" 00:53:50.692 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:53:50.692 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:53:50.692 Cannot find device "nvmf_init_br2" 00:53:50.692 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:53:50.692 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:53:50.692 Cannot find device "nvmf_tgt_br" 00:53:50.692 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:53:50.692 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:53:50.692 Cannot find device "nvmf_tgt_br2" 00:53:50.692 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:53:50.692 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:53:50.692 Cannot find device "nvmf_br" 00:53:50.692 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:53:50.692 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:53:50.692 Cannot find device "nvmf_init_if" 00:53:50.692 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:53:50.692 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:53:50.692 Cannot find device "nvmf_init_if2" 00:53:50.692 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:53:50.692 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:53:50.692 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:53:50.692 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:53:50.692 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:53:50.692 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:53:50.692 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:53:50.692 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:53:50.692 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:53:50.692 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:53:50.692 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:53:50.692 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:53:51.007 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:53:51.007 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:53:51.007 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:53:51.007 
10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:53:51.007 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:53:51.007 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:53:51.007 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:53:51.007 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:53:51.007 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:53:51.007 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:53:51.007 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:53:51.007 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:53:51.007 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:53:51.007 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:53:51.007 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:53:51.007 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:53:51.007 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:53:51.007 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:53:51.007 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:53:51.008 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:53:51.008 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:53:51.008 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:53:51.008 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:53:51.008 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:53:51.008 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:53:51.008 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:53:51.008 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:53:51.008 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:53:51.008 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:53:51.008 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.142 ms 00:53:51.008 00:53:51.008 --- 10.0.0.3 ping statistics --- 00:53:51.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:51.008 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:53:51.008 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:53:51.008 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:53:51.008 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.088 ms 00:53:51.008 00:53:51.008 --- 10.0.0.4 ping statistics --- 00:53:51.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:51.008 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:53:51.008 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:53:51.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:53:51.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:53:51.008 00:53:51.008 --- 10.0.0.1 ping statistics --- 00:53:51.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:51.008 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:53:51.008 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:53:51.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:53:51.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:53:51.008 00:53:51.008 --- 10.0.0.2 ping statistics --- 00:53:51.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:51.008 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:53:51.008 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:53:51.008 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:53:51.008 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:53:51.008 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:53:51.008 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:53:51.008 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:53:51.008 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:53:51.008 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:53:51.008 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:53:51.008 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:53:51.008 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:53:51.008 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:53:51.008 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:53:51.008 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=87907 00:53:51.008 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:53:51.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
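The bring-up traced above reduces to a short sequence: the veth/bridge plumbing gives the host two initiator addresses (10.0.0.1 and 10.0.0.2 on nvmf_init_if/nvmf_init_if2) and the nvmf_tgt_ns_spdk namespace two target addresses (10.0.0.3 and 10.0.0.4 on nvmf_tgt_if/nvmf_tgt_if2), all joined on bridge nvmf_br and verified by the four pings; the target is then launched inside the namespace. A minimal sketch of that launch-and-wait step, assuming the repo at /home/vagrant/spdk_repo/spdk and approximating the harness's waitforlisten helper with a polling loop against the RPC socket (the UNIX socket lives on the shared filesystem, so it is reachable from outside the namespace):

  # Launch the NVMe-oF target inside the test namespace (flags as in the log).
  SPDK=/home/vagrant/spdk_repo/spdk
  ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # waitforlisten, approximated: poll until the target answers on /var/tmp/spdk.sock.
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done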
00:53:51.008 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 87907 00:53:51.008 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 87907 ']' 00:53:51.008 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:53:51.008 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:51.008 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:53:51.008 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:51.008 10:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:53:51.008 [2024-12-09 10:04:58.002032] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:53:51.008 [2024-12-09 10:04:58.002171] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:53:51.282 [2024-12-09 10:04:58.151545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:53:51.282 [2024-12-09 10:04:58.231062] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:53:51.282 [2024-12-09 10:04:58.231127] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:53:51.282 [2024-12-09 10:04:58.231135] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:53:51.282 [2024-12-09 10:04:58.231140] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:53:51.282 [2024-12-09 10:04:58.231145] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
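The NOTICE lines above are the target telling the operator how to inspect the tracepoints enabled by -e 0xFFFF. Using only the commands the log itself suggests (shm instance id 0, matching the -i 0 launch flag; the path-qualified spdk_trace binary is assumed to be in the standard build output):

  # Snapshot the nvmf tracepoint events from the running target...
  "$SPDK/build/bin/spdk_trace" -s nvmf -i 0
  # ...or keep the raw shared-memory trace file for offline analysis.
  cp /dev/shm/nvmf_trace.0 /tmp/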
00:53:51.282 [2024-12-09 10:04:58.232511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:51.282 [2024-12-09 10:04:58.232613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:53:51.282 [2024-12-09 10:04:58.232727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:53:51.283 [2024-12-09 10:04:58.232726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:53:52.222 10:04:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:52.222 10:04:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:53:52.222 10:04:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:53:52.222 10:04:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:52.222 10:04:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:53:52.222 [2024-12-09 10:04:58.918305] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:53:52.222 10:04:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:52.222 10:04:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:53:52.222 10:04:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:53:52.222 10:04:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:53:52.222 10:04:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:53:52.222 10:04:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:52.222 10:04:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:53:52.222 Malloc0 00:53:52.222 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:52.222 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:53:52.222 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:52.222 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:53:52.222 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:52.222 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:53:52.222 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:52.222 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:53:52.222 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:52.222 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:53:52.222 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:52.222 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:53:52.222 [2024-12-09 10:04:59.061387] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:53:52.223 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:52.223 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:53:52.223 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:52.223 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:53:52.223 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:52.223 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:53:52.223 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:52.223 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:53:52.223 [ 00:53:52.223 { 00:53:52.223 "allow_any_host": true, 00:53:52.223 "hosts": [], 00:53:52.223 "listen_addresses": [ 00:53:52.223 { 00:53:52.223 "adrfam": "IPv4", 00:53:52.223 "traddr": "10.0.0.3", 00:53:52.223 "trsvcid": "4420", 00:53:52.223 "trtype": "TCP" 00:53:52.223 } 00:53:52.223 ], 00:53:52.223 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:53:52.223 "subtype": "Discovery" 00:53:52.223 }, 00:53:52.223 { 00:53:52.223 "allow_any_host": true, 00:53:52.223 "hosts": [], 00:53:52.223 "listen_addresses": [ 00:53:52.223 { 00:53:52.223 "adrfam": "IPv4", 00:53:52.223 "traddr": "10.0.0.3", 00:53:52.223 "trsvcid": "4420", 00:53:52.223 "trtype": "TCP" 00:53:52.223 } 00:53:52.223 ], 00:53:52.223 "max_cntlid": 65519, 00:53:52.223 "max_namespaces": 32, 00:53:52.223 "min_cntlid": 1, 00:53:52.223 "model_number": "SPDK bdev Controller", 00:53:52.223 "namespaces": [ 00:53:52.223 { 00:53:52.223 "bdev_name": "Malloc0", 00:53:52.223 "eui64": "ABCDEF0123456789", 00:53:52.223 "name": "Malloc0", 00:53:52.223 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:53:52.223 "nsid": 1, 00:53:52.223 "uuid": "04035ba5-8e14-43fc-9f08-b0ea25d7bd23" 00:53:52.223 } 00:53:52.223 ], 00:53:52.223 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:53:52.223 "serial_number": "SPDK00000000000001", 00:53:52.223 "subtype": "NVMe" 00:53:52.223 } 00:53:52.223 ] 00:53:52.223 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:52.223 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:53:52.223 [2024-12-09 10:04:59.132286] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:53:52.223 [2024-12-09 10:04:59.132403] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87960 ] 00:53:52.486 [2024-12-09 10:04:59.284910] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:53:52.486 [2024-12-09 10:04:59.285000] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:53:52.486 [2024-12-09 10:04:59.285005] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:53:52.486 [2024-12-09 10:04:59.285027] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:53:52.486 [2024-12-09 10:04:59.285046] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:53:52.486 [2024-12-09 10:04:59.285472] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:53:52.486 [2024-12-09 10:04:59.285549] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2032d90 0 00:53:52.486 [2024-12-09 10:04:59.298502] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:53:52.486 [2024-12-09 10:04:59.298531] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:53:52.486 [2024-12-09 10:04:59.298536] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:53:52.486 [2024-12-09 10:04:59.298539] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:53:52.486 [2024-12-09 10:04:59.298583] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.486 [2024-12-09 10:04:59.298589] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.486 [2024-12-09 10:04:59.298592] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2032d90) 00:53:52.486 [2024-12-09 10:04:59.298609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:53:52.486 [2024-12-09 10:04:59.298649] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073600, cid 0, qid 0 00:53:52.486 [2024-12-09 10:04:59.306496] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.486 [2024-12-09 10:04:59.306514] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.486 [2024-12-09 10:04:59.306518] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.486 [2024-12-09 10:04:59.306522] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073600) on tqpair=0x2032d90 00:53:52.486 [2024-12-09 10:04:59.306534] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:53:52.486 [2024-12-09 10:04:59.306545] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:53:52.486 [2024-12-09 10:04:59.306550] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:53:52.486 [2024-12-09 10:04:59.306572] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.486 [2024-12-09 10:04:59.306576] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:53:52.486 [2024-12-09 10:04:59.306579] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2032d90) 00:53:52.486 [2024-12-09 10:04:59.306591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.486 [2024-12-09 10:04:59.306621] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073600, cid 0, qid 0 00:53:52.486 [2024-12-09 10:04:59.306669] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.486 [2024-12-09 10:04:59.306675] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.486 [2024-12-09 10:04:59.306678] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.486 [2024-12-09 10:04:59.306681] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073600) on tqpair=0x2032d90 00:53:52.486 [2024-12-09 10:04:59.306686] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:53:52.486 [2024-12-09 10:04:59.306692] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:53:52.486 [2024-12-09 10:04:59.306698] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.486 [2024-12-09 10:04:59.306701] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.486 [2024-12-09 10:04:59.306704] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2032d90) 00:53:52.486 [2024-12-09 10:04:59.306710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.486 [2024-12-09 10:04:59.306726] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073600, cid 0, qid 0 00:53:52.487 [2024-12-09 10:04:59.306770] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.487 [2024-12-09 10:04:59.306775] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.487 [2024-12-09 10:04:59.306778] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.487 [2024-12-09 10:04:59.306781] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073600) on tqpair=0x2032d90 00:53:52.487 [2024-12-09 10:04:59.306787] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:53:52.487 [2024-12-09 10:04:59.306793] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:53:52.487 [2024-12-09 10:04:59.306799] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.487 [2024-12-09 10:04:59.306802] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.487 [2024-12-09 10:04:59.306804] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2032d90) 00:53:52.487 [2024-12-09 10:04:59.306810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.487 [2024-12-09 10:04:59.306825] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073600, cid 0, qid 0 00:53:52.487 [2024-12-09 10:04:59.306868] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.487 [2024-12-09 10:04:59.306874] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.487 [2024-12-09 10:04:59.306877] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.487 [2024-12-09 10:04:59.306880] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073600) on tqpair=0x2032d90 00:53:52.487 [2024-12-09 10:04:59.306885] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:53:52.487 [2024-12-09 10:04:59.306893] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.487 [2024-12-09 10:04:59.306896] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.487 [2024-12-09 10:04:59.306899] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2032d90) 00:53:52.487 [2024-12-09 10:04:59.306905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.487 [2024-12-09 10:04:59.306920] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073600, cid 0, qid 0 00:53:52.487 [2024-12-09 10:04:59.306965] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.487 [2024-12-09 10:04:59.306970] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.487 [2024-12-09 10:04:59.306973] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.487 [2024-12-09 10:04:59.306976] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073600) on tqpair=0x2032d90 00:53:52.487 [2024-12-09 10:04:59.306980] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:53:52.487 [2024-12-09 10:04:59.306985] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:53:52.487 [2024-12-09 10:04:59.306992] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:53:52.487 [2024-12-09 10:04:59.307098] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:53:52.487 [2024-12-09 10:04:59.307108] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:53:52.487 [2024-12-09 10:04:59.307119] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.487 [2024-12-09 10:04:59.307122] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.487 [2024-12-09 10:04:59.307125] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2032d90) 00:53:52.487 [2024-12-09 10:04:59.307131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.487 [2024-12-09 10:04:59.307149] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073600, cid 0, qid 0 00:53:52.487 [2024-12-09 10:04:59.307198] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.487 [2024-12-09 10:04:59.307204] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.487 [2024-12-09 10:04:59.307207] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:53:52.487 [2024-12-09 10:04:59.307210] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073600) on tqpair=0x2032d90 00:53:52.487 [2024-12-09 10:04:59.307215] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:53:52.487 [2024-12-09 10:04:59.307222] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.487 [2024-12-09 10:04:59.307226] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.487 [2024-12-09 10:04:59.307229] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2032d90) 00:53:52.487 [2024-12-09 10:04:59.307235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.487 [2024-12-09 10:04:59.307249] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073600, cid 0, qid 0 00:53:52.487 [2024-12-09 10:04:59.307301] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.487 [2024-12-09 10:04:59.307306] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.487 [2024-12-09 10:04:59.307309] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.487 [2024-12-09 10:04:59.307312] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073600) on tqpair=0x2032d90 00:53:52.487 [2024-12-09 10:04:59.307316] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:53:52.487 [2024-12-09 10:04:59.307321] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:53:52.487 [2024-12-09 10:04:59.307327] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:53:52.487 [2024-12-09 10:04:59.307336] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:53:52.487 [2024-12-09 10:04:59.307348] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.487 [2024-12-09 10:04:59.307351] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2032d90) 00:53:52.487 [2024-12-09 10:04:59.307357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.487 [2024-12-09 10:04:59.307373] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073600, cid 0, qid 0 00:53:52.487 [2024-12-09 10:04:59.307453] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:53:52.487 [2024-12-09 10:04:59.307464] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:53:52.487 [2024-12-09 10:04:59.307468] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:53:52.487 [2024-12-09 10:04:59.307471] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2032d90): datao=0, datal=4096, cccid=0 00:53:52.487 [2024-12-09 10:04:59.307489] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2073600) on tqpair(0x2032d90): expected_datao=0, payload_size=4096 00:53:52.487 [2024-12-09 10:04:59.307493] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:53:52.487 [2024-12-09 10:04:59.307502] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:53:52.487 [2024-12-09 10:04:59.307506] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:53:52.487 [2024-12-09 10:04:59.307514] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.487 [2024-12-09 10:04:59.307520] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.487 [2024-12-09 10:04:59.307522] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.487 [2024-12-09 10:04:59.307525] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073600) on tqpair=0x2032d90 00:53:52.487 [2024-12-09 10:04:59.307535] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:53:52.487 [2024-12-09 10:04:59.307541] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:53:52.487 [2024-12-09 10:04:59.307545] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:53:52.487 [2024-12-09 10:04:59.307550] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:53:52.487 [2024-12-09 10:04:59.307559] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:53:52.487 [2024-12-09 10:04:59.307564] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:53:52.487 [2024-12-09 10:04:59.307571] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:53:52.487 [2024-12-09 10:04:59.307578] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.487 [2024-12-09 10:04:59.307581] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.487 [2024-12-09 10:04:59.307584] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2032d90) 00:53:52.487 [2024-12-09 10:04:59.307591] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:53:52.487 [2024-12-09 10:04:59.307608] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073600, cid 0, qid 0 00:53:52.487 [2024-12-09 10:04:59.307655] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.487 [2024-12-09 10:04:59.307660] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.487 [2024-12-09 10:04:59.307663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.487 [2024-12-09 10:04:59.307666] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073600) on tqpair=0x2032d90 00:53:52.487 [2024-12-09 10:04:59.307679] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.487 [2024-12-09 10:04:59.307684] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.487 [2024-12-09 10:04:59.307687] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2032d90) 00:53:52.487 [2024-12-09 10:04:59.307692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:53:52.487 
[2024-12-09 10:04:59.307698] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.487 [2024-12-09 10:04:59.307701] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.487 [2024-12-09 10:04:59.307706] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2032d90) 00:53:52.487 [2024-12-09 10:04:59.307711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:53:52.487 [2024-12-09 10:04:59.307716] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.487 [2024-12-09 10:04:59.307719] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.487 [2024-12-09 10:04:59.307721] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2032d90) 00:53:52.487 [2024-12-09 10:04:59.307726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:53:52.487 [2024-12-09 10:04:59.307731] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.487 [2024-12-09 10:04:59.307734] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.487 [2024-12-09 10:04:59.307736] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2032d90) 00:53:52.488 [2024-12-09 10:04:59.307741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:53:52.488 [2024-12-09 10:04:59.307747] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:53:52.488 [2024-12-09 10:04:59.307755] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:53:52.488 [2024-12-09 10:04:59.307760] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.488 [2024-12-09 10:04:59.307763] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2032d90) 00:53:52.488 [2024-12-09 10:04:59.307769] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.488 [2024-12-09 10:04:59.307786] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073600, cid 0, qid 0 00:53:52.488 [2024-12-09 10:04:59.307790] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073780, cid 1, qid 0 00:53:52.488 [2024-12-09 10:04:59.307795] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073900, cid 2, qid 0 00:53:52.488 [2024-12-09 10:04:59.307799] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073a80, cid 3, qid 0 00:53:52.488 [2024-12-09 10:04:59.307803] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073c00, cid 4, qid 0 00:53:52.488 [2024-12-09 10:04:59.307880] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.488 [2024-12-09 10:04:59.307886] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.488 [2024-12-09 10:04:59.307889] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.488 [2024-12-09 10:04:59.307892] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073c00) on tqpair=0x2032d90 00:53:52.488 [2024-12-09 
10:04:59.307901] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:53:52.488 [2024-12-09 10:04:59.307905] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:53:52.488 [2024-12-09 10:04:59.307914] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.488 [2024-12-09 10:04:59.307917] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2032d90) 00:53:52.488 [2024-12-09 10:04:59.307923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.488 [2024-12-09 10:04:59.307939] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073c00, cid 4, qid 0 00:53:52.488 [2024-12-09 10:04:59.307988] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:53:52.488 [2024-12-09 10:04:59.307994] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:53:52.488 [2024-12-09 10:04:59.307997] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:53:52.488 [2024-12-09 10:04:59.308000] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2032d90): datao=0, datal=4096, cccid=4 00:53:52.488 [2024-12-09 10:04:59.308003] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2073c00) on tqpair(0x2032d90): expected_datao=0, payload_size=4096 00:53:52.488 [2024-12-09 10:04:59.308007] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.488 [2024-12-09 10:04:59.308014] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:53:52.488 [2024-12-09 10:04:59.308017] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:53:52.488 [2024-12-09 10:04:59.308024] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.488 [2024-12-09 10:04:59.308029] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.488 [2024-12-09 10:04:59.308032] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.488 [2024-12-09 10:04:59.308035] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073c00) on tqpair=0x2032d90 00:53:52.488 [2024-12-09 10:04:59.308048] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:53:52.488 [2024-12-09 10:04:59.308088] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.488 [2024-12-09 10:04:59.308093] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2032d90) 00:53:52.488 [2024-12-09 10:04:59.308098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.488 [2024-12-09 10:04:59.308105] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.488 [2024-12-09 10:04:59.308108] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.488 [2024-12-09 10:04:59.308111] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2032d90) 00:53:52.488 [2024-12-09 10:04:59.308117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:53:52.488 [2024-12-09 10:04:59.308136] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073c00, cid 4, qid 0 00:53:52.488 [2024-12-09 10:04:59.308142] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073d80, cid 5, qid 0 00:53:52.488 [2024-12-09 10:04:59.308224] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:53:52.488 [2024-12-09 10:04:59.308235] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:53:52.488 [2024-12-09 10:04:59.308239] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:53:52.488 [2024-12-09 10:04:59.308242] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2032d90): datao=0, datal=1024, cccid=4 00:53:52.488 [2024-12-09 10:04:59.308246] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2073c00) on tqpair(0x2032d90): expected_datao=0, payload_size=1024 00:53:52.488 [2024-12-09 10:04:59.308249] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.488 [2024-12-09 10:04:59.308255] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:53:52.488 [2024-12-09 10:04:59.308258] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:53:52.488 [2024-12-09 10:04:59.308263] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.488 [2024-12-09 10:04:59.308268] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.488 [2024-12-09 10:04:59.308270] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.488 [2024-12-09 10:04:59.308273] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073d80) on tqpair=0x2032d90 00:53:52.488 [2024-12-09 10:04:59.352497] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.488 [2024-12-09 10:04:59.352531] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.488 [2024-12-09 10:04:59.352535] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.488 [2024-12-09 10:04:59.352539] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073c00) on tqpair=0x2032d90 00:53:52.488 [2024-12-09 10:04:59.352566] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.488 [2024-12-09 10:04:59.352569] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2032d90) 00:53:52.488 [2024-12-09 10:04:59.352582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.488 [2024-12-09 10:04:59.352626] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073c00, cid 4, qid 0 00:53:52.488 [2024-12-09 10:04:59.352692] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:53:52.488 [2024-12-09 10:04:59.352697] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:53:52.488 [2024-12-09 10:04:59.352699] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:53:52.488 [2024-12-09 10:04:59.352702] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2032d90): datao=0, datal=3072, cccid=4 00:53:52.488 [2024-12-09 10:04:59.352706] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2073c00) on tqpair(0x2032d90): expected_datao=0, payload_size=3072 00:53:52.488 [2024-12-09 10:04:59.352710] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.488 [2024-12-09 10:04:59.352717] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
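(Editorial aside on the GET LOG PAGE (02) commands in this stretch: all of them carry LID 0x70, the NVMe-oF discovery log page, and the transfer length rides in cdw10 bits 31:16 as NUMDL, a zero-based dword count per the NVMe base specification. Decoding the three cdw10 values in this trace reproduces exactly the c2h_data sizes the target returns: 1024 bytes for the discovery log header (Generation Counter, Number of Records, Record Format), 3072 bytes for the header plus the two 1024-byte records printed further down, and a final 8-byte re-read of the generation counter, just below, to confirm the log did not change mid-read. A minimal stand-alone decoder, assuming only the standard Get Log Page dword layout:

    #include <stdint.h>
    #include <stdio.h>

    /* Decode Get Log Page (opcode 02h) CDW10: bits 7:0 carry the Log Page
     * Identifier (LID), bits 31:16 carry NUMDL, a zero-based dword count
     * (NUMDU in CDW11 is 0 throughout this trace). LID 0x70 is the
     * NVMe-oF discovery log page. */
    static void decode_get_log_page_cdw10(uint32_t cdw10)
    {
        unsigned lid   = cdw10 & 0xffu;
        unsigned numdl = cdw10 >> 16;
        unsigned bytes = (numdl + 1) * 4;
        printf("cdw10=0x%08x -> LID 0x%02x, %u bytes\n", cdw10, lid, bytes);
    }

    int main(void)
    {
        decode_get_log_page_cdw10(0x00ff0070); /* 1024 B: discovery log header */
        decode_get_log_page_cdw10(0x02ff0070); /* 3072 B: header + 2 records   */
        decode_get_log_page_cdw10(0x00010070); /* 8 B: generation counter only */
        return 0;
    }

The zero-based dword count is why a 1024-byte transfer encodes as 0x00ff rather than 0x0100.)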
00:53:52.488 [2024-12-09 10:04:59.352720] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:53:52.488 [2024-12-09 10:04:59.352727] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.488 [2024-12-09 10:04:59.352731] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.488 [2024-12-09 10:04:59.352733] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.488 [2024-12-09 10:04:59.352736] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073c00) on tqpair=0x2032d90 00:53:52.488 [2024-12-09 10:04:59.352743] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.488 [2024-12-09 10:04:59.352745] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2032d90) 00:53:52.488 [2024-12-09 10:04:59.352751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.488 [2024-12-09 10:04:59.352769] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073c00, cid 4, qid 0 00:53:52.488 [2024-12-09 10:04:59.352839] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:53:52.488 [2024-12-09 10:04:59.352844] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:53:52.488 [2024-12-09 10:04:59.352847] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:53:52.488 [2024-12-09 10:04:59.352849] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2032d90): datao=0, datal=8, cccid=4 00:53:52.488 [2024-12-09 10:04:59.352852] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2073c00) on tqpair(0x2032d90): expected_datao=0, payload_size=8 00:53:52.488 [2024-12-09 10:04:59.352855] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.488 [2024-12-09 10:04:59.352859] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:53:52.488 [2024-12-09 10:04:59.352862] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:53:52.488 ===================================================== 00:53:52.488 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:53:52.488 ===================================================== 00:53:52.488 Controller Capabilities/Features 00:53:52.488 ================================ 00:53:52.488 Vendor ID: 0000 00:53:52.488 Subsystem Vendor ID: 0000 00:53:52.488 Serial Number: .................... 00:53:52.488 Model Number: ........................................ 
00:53:52.488 Firmware Version: 25.01 00:53:52.488 Recommended Arb Burst: 0 00:53:52.488 IEEE OUI Identifier: 00 00 00 00:53:52.488 Multi-path I/O 00:53:52.488 May have multiple subsystem ports: No 00:53:52.488 May have multiple controllers: No 00:53:52.488 Associated with SR-IOV VF: No 00:53:52.488 Max Data Transfer Size: 131072 00:53:52.488 Max Number of Namespaces: 0 00:53:52.488 Max Number of I/O Queues: 1024 00:53:52.488 NVMe Specification Version (VS): 1.3 00:53:52.488 NVMe Specification Version (Identify): 1.3 00:53:52.488 Maximum Queue Entries: 128 00:53:52.488 Contiguous Queues Required: Yes 00:53:52.489 Arbitration Mechanisms Supported 00:53:52.489 Weighted Round Robin: Not Supported 00:53:52.489 Vendor Specific: Not Supported 00:53:52.489 Reset Timeout: 15000 ms 00:53:52.489 Doorbell Stride: 4 bytes 00:53:52.489 NVM Subsystem Reset: Not Supported 00:53:52.489 Command Sets Supported 00:53:52.489 NVM Command Set: Supported 00:53:52.489 Boot Partition: Not Supported 00:53:52.489 Memory Page Size Minimum: 4096 bytes 00:53:52.489 Memory Page Size Maximum: 4096 bytes 00:53:52.489 Persistent Memory Region: Not Supported 00:53:52.489 Optional Asynchronous Events Supported 00:53:52.489 Namespace Attribute Notices: Not Supported 00:53:52.489 Firmware Activation Notices: Not Supported 00:53:52.489 ANA Change Notices: Not Supported 00:53:52.489 PLE Aggregate Log Change Notices: Not Supported 00:53:52.489 LBA Status Info Alert Notices: Not Supported 00:53:52.489 EGE Aggregate Log Change Notices: Not Supported 00:53:52.489 Normal NVM Subsystem Shutdown event: Not Supported 00:53:52.489 Zone Descriptor Change Notices: Not Supported 00:53:52.489 Discovery Log Change Notices: Supported 00:53:52.489 Controller Attributes 00:53:52.489 128-bit Host Identifier: Not Supported 00:53:52.489 Non-Operational Permissive Mode: Not Supported 00:53:52.489 NVM Sets: Not Supported 00:53:52.489 Read Recovery Levels: Not Supported 00:53:52.489 Endurance Groups: Not Supported 00:53:52.489 Predictable Latency Mode: Not Supported 00:53:52.489 Traffic Based Keep Alive: Not Supported 00:53:52.489 Namespace Granularity: Not Supported 00:53:52.489 SQ Associations: Not Supported 00:53:52.489 UUID List: Not Supported 00:53:52.489 Multi-Domain Subsystem: Not Supported 00:53:52.489 Fixed Capacity Management: Not Supported 00:53:52.489 Variable Capacity Management: Not Supported 00:53:52.489 Delete Endurance Group: Not Supported 00:53:52.489 Delete NVM Set: Not Supported 00:53:52.489 Extended LBA Formats Supported: Not Supported 00:53:52.489 Flexible Data Placement Supported: Not Supported 00:53:52.489 00:53:52.489 Controller Memory Buffer Support 00:53:52.489 ================================ 00:53:52.489 Supported: No 00:53:52.489 00:53:52.489 Persistent Memory Region Support 00:53:52.489 ================================ 00:53:52.489 Supported: No 00:53:52.489 00:53:52.489 Admin Command Set Attributes 00:53:52.489 ============================ 00:53:52.489 Security Send/Receive: Not Supported 00:53:52.489 Format NVM: Not Supported 00:53:52.489 Firmware Activate/Download: Not Supported 00:53:52.489 Namespace Management: Not Supported 00:53:52.489 Device Self-Test: Not Supported 00:53:52.489 Directives: Not Supported 00:53:52.489 NVMe-MI: Not Supported 00:53:52.489 Virtualization Management: Not Supported 00:53:52.489 Doorbell Buffer Config: Not Supported 00:53:52.489 Get LBA Status Capability: Not Supported 00:53:52.489 Command & Feature Lockdown Capability: Not Supported 00:53:52.489 Abort Command Limit: 1 00:53:52.489 Async
Event Request Limit: 4 00:53:52.489 Number of Firmware Slots: N/A 00:53:52.489 Firmware Slot 1 Read-Only: N/A 00:53:52.489 Firmware Activation Without Reset: N/A 00:53:52.489 Multiple Update Detection Support: N/A 00:53:52.489 Firmware Update Granularity: No Information Provided 00:53:52.489 Per-Namespace SMART Log: No 00:53:52.489 Asymmetric Namespace Access Log Page: Not Supported 00:53:52.489 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:53:52.489 Command Effects Log Page: Not Supported 00:53:52.489 Get Log Page Extended Data: Supported 00:53:52.489 Telemetry Log Pages: Not Supported 00:53:52.489 Persistent Event Log Pages: Not Supported 00:53:52.489 Supported Log Pages Log Page: May Support 00:53:52.489 Commands Supported & Effects Log Page: Not Supported 00:53:52.489 Feature Identifiers & Effects Log Page: May Support 00:53:52.489 NVMe-MI Commands & Effects Log Page: May Support 00:53:52.489 Data Area 4 for Telemetry Log: Not Supported 00:53:52.489 Error Log Page Entries Supported: 128 00:53:52.489 Keep Alive: Not Supported 00:53:52.489 00:53:52.489 NVM Command Set Attributes 00:53:52.489 ========================== 00:53:52.489 Submission Queue Entry Size 00:53:52.489 Max: 1 00:53:52.489 Min: 1 00:53:52.489 Completion Queue Entry Size 00:53:52.489 Max: 1 00:53:52.489 Min: 1 00:53:52.489 Number of Namespaces: 0 00:53:52.489 Compare Command: Not Supported 00:53:52.489 Write Uncorrectable Command: Not Supported 00:53:52.489 Dataset Management Command: Not Supported 00:53:52.489 Write Zeroes Command: Not Supported 00:53:52.489 Set Features Save Field: Not Supported 00:53:52.489 Reservations: Not Supported 00:53:52.489 Timestamp: Not Supported 00:53:52.489 Copy: Not Supported 00:53:52.489 Volatile Write Cache: Not Present 00:53:52.489 Atomic Write Unit (Normal): 1 00:53:52.489 Atomic Write Unit (PFail): 1 00:53:52.489 Atomic Compare & Write Unit: 1 00:53:52.489 Fused Compare & Write: Supported 00:53:52.489 Scatter-Gather List 00:53:52.489 SGL Command Set: Supported 00:53:52.489 SGL Keyed: Supported 00:53:52.489 SGL Bit Bucket Descriptor: Not Supported 00:53:52.489 SGL Metadata Pointer: Not Supported 00:53:52.489 Oversized SGL: Not Supported 00:53:52.489 SGL Metadata Address: Not Supported 00:53:52.489 SGL Offset: Supported 00:53:52.489 Transport SGL Data Block: Not Supported 00:53:52.489 Replay Protected Memory Block: Not Supported 00:53:52.489 00:53:52.489 Firmware Slot Information 00:53:52.489 ========================= 00:53:52.489 Active slot: 0 00:53:52.489 00:53:52.489 00:53:52.489 Error Log 00:53:52.489 ========= 00:53:52.489 00:53:52.489 Active Namespaces 00:53:52.489 ================= 00:53:52.489 Discovery Log Page 00:53:52.489 ================== 00:53:52.489 Generation Counter: 2 00:53:52.489 Number of Records: 2 00:53:52.489 Record Format: 0 00:53:52.489 00:53:52.489 Discovery Log Entry 0 00:53:52.489 ---------------------- 00:53:52.489 Transport Type: 3 (TCP) 00:53:52.489 Address Family: 1 (IPv4) 00:53:52.489 Subsystem Type: 3 (Current Discovery Subsystem) 00:53:52.489 Entry Flags: 00:53:52.489 Duplicate Returned Information: 1 00:53:52.489 Explicit Persistent Connection Support for Discovery: 1 00:53:52.489 Transport Requirements: 00:53:52.489 Secure Channel: Not Required 00:53:52.489 Port ID: 0 (0x0000) 00:53:52.489 Controller ID: 65535 (0xffff) 00:53:52.489 Admin Max SQ Size: 128 00:53:52.489 Transport Service Identifier: 4420 00:53:52.489 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:53:52.489 Transport Address: 10.0.0.3 00:53:52.489 
Discovery Log Entry 1 00:53:52.489 ---------------------- 00:53:52.489 Transport Type: 3 (TCP) 00:53:52.489 Address Family: 1 (IPv4) 00:53:52.489 Subsystem Type: 2 (NVM Subsystem) 00:53:52.489 Entry Flags: 00:53:52.489 Duplicate Returned Information: 0 00:53:52.489 Explicit Persistent Connection Support for Discovery: 0 00:53:52.489 Transport Requirements: 00:53:52.489 Secure Channel: Not Required 00:53:52.489 Port ID: 0 (0x0000) 00:53:52.489 Controller ID: 65535 (0xffff) 00:53:52.489 Admin Max SQ Size: 128 00:53:52.489 Transport Service Identifier: 4420 00:53:52.489 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:53:52.489 Transport Address: 10.0.0.3 [2024-12-09 10:04:59.394551] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.489 [2024-12-09 10:04:59.394581] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.489 [2024-12-09 10:04:59.394584] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.489 [2024-12-09 10:04:59.394588] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073c00) on tqpair=0x2032d90 00:53:52.489 [2024-12-09 10:04:59.394740] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:53:52.489 [2024-12-09 10:04:59.394755] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073600) on tqpair=0x2032d90 00:53:52.489 [2024-12-09 10:04:59.394763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:52.489 [2024-12-09 10:04:59.394768] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073780) on tqpair=0x2032d90 00:53:52.489 [2024-12-09 10:04:59.394771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:52.489 [2024-12-09 10:04:59.394775] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073900) on tqpair=0x2032d90 00:53:52.489 [2024-12-09 10:04:59.394779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:52.489 [2024-12-09 10:04:59.394783] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073a80) on tqpair=0x2032d90 00:53:52.489 [2024-12-09 10:04:59.394786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:52.489 [2024-12-09 10:04:59.394801] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.489 [2024-12-09 10:04:59.394804] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.489 [2024-12-09 10:04:59.394807] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2032d90) 00:53:52.489 [2024-12-09 10:04:59.394816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.490 [2024-12-09 10:04:59.394840] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073a80, cid 3, qid 0 00:53:52.490 [2024-12-09 10:04:59.394907] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.490 [2024-12-09 10:04:59.394912] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.490 [2024-12-09 10:04:59.394915] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.394918] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073a80) on tqpair=0x2032d90 00:53:52.490 [2024-12-09 10:04:59.394928] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.394931] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.394933] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2032d90) 00:53:52.490 [2024-12-09 10:04:59.394938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.490 [2024-12-09 10:04:59.394954] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073a80, cid 3, qid 0 00:53:52.490 [2024-12-09 10:04:59.395043] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.490 [2024-12-09 10:04:59.395063] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.490 [2024-12-09 10:04:59.395066] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.395069] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073a80) on tqpair=0x2032d90 00:53:52.490 [2024-12-09 10:04:59.395073] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:53:52.490 [2024-12-09 10:04:59.395077] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:53:52.490 [2024-12-09 10:04:59.395084] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.395088] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.395090] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2032d90) 00:53:52.490 [2024-12-09 10:04:59.395096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.490 [2024-12-09 10:04:59.395110] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073a80, cid 3, qid 0 00:53:52.490 [2024-12-09 10:04:59.395155] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.490 [2024-12-09 10:04:59.395161] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.490 [2024-12-09 10:04:59.395163] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.395166] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073a80) on tqpair=0x2032d90 00:53:52.490 [2024-12-09 10:04:59.395185] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.395188] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.395191] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2032d90) 00:53:52.490 [2024-12-09 10:04:59.395196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.490 [2024-12-09 10:04:59.395208] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073a80, cid 3, qid 0 00:53:52.490 [2024-12-09 10:04:59.395251] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.490 [2024-12-09 10:04:59.395256] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.490 [2024-12-09 
10:04:59.395258] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.395261] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073a80) on tqpair=0x2032d90 00:53:52.490 [2024-12-09 10:04:59.395268] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.395270] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.395273] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2032d90) 00:53:52.490 [2024-12-09 10:04:59.395278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.490 [2024-12-09 10:04:59.395297] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073a80, cid 3, qid 0 00:53:52.490 [2024-12-09 10:04:59.395342] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.490 [2024-12-09 10:04:59.395348] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.490 [2024-12-09 10:04:59.395350] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.395353] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073a80) on tqpair=0x2032d90 00:53:52.490 [2024-12-09 10:04:59.395361] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.395364] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.395366] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2032d90) 00:53:52.490 [2024-12-09 10:04:59.395371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.490 [2024-12-09 10:04:59.395385] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073a80, cid 3, qid 0 00:53:52.490 [2024-12-09 10:04:59.395430] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.490 [2024-12-09 10:04:59.395435] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.490 [2024-12-09 10:04:59.395438] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.395440] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073a80) on tqpair=0x2032d90 00:53:52.490 [2024-12-09 10:04:59.395448] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.395451] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.395453] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2032d90) 00:53:52.490 [2024-12-09 10:04:59.395459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.490 [2024-12-09 10:04:59.395474] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073a80, cid 3, qid 0 00:53:52.490 [2024-12-09 10:04:59.395528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.490 [2024-12-09 10:04:59.395537] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.490 [2024-12-09 10:04:59.395539] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.395542] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073a80) on 
tqpair=0x2032d90 00:53:52.490 [2024-12-09 10:04:59.395551] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.395554] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.395556] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2032d90) 00:53:52.490 [2024-12-09 10:04:59.395562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.490 [2024-12-09 10:04:59.395578] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073a80, cid 3, qid 0 00:53:52.490 [2024-12-09 10:04:59.395620] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.490 [2024-12-09 10:04:59.395625] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.490 [2024-12-09 10:04:59.395628] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.395631] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073a80) on tqpair=0x2032d90 00:53:52.490 [2024-12-09 10:04:59.395639] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.395642] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.395644] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2032d90) 00:53:52.490 [2024-12-09 10:04:59.395650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.490 [2024-12-09 10:04:59.395663] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073a80, cid 3, qid 0 00:53:52.490 [2024-12-09 10:04:59.395712] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.490 [2024-12-09 10:04:59.395717] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.490 [2024-12-09 10:04:59.395720] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.395722] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073a80) on tqpair=0x2032d90 00:53:52.490 [2024-12-09 10:04:59.395730] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.395733] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.395735] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2032d90) 00:53:52.490 [2024-12-09 10:04:59.395741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.490 [2024-12-09 10:04:59.395755] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073a80, cid 3, qid 0 00:53:52.490 [2024-12-09 10:04:59.395798] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.490 [2024-12-09 10:04:59.395803] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.490 [2024-12-09 10:04:59.395805] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.395809] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073a80) on tqpair=0x2032d90 00:53:52.490 [2024-12-09 10:04:59.395816] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.395819] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.395822] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2032d90) 00:53:52.490 [2024-12-09 10:04:59.395828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.490 [2024-12-09 10:04:59.395841] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073a80, cid 3, qid 0 00:53:52.490 [2024-12-09 10:04:59.395885] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.490 [2024-12-09 10:04:59.395890] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.490 [2024-12-09 10:04:59.395893] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.395895] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073a80) on tqpair=0x2032d90 00:53:52.490 [2024-12-09 10:04:59.395903] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.395906] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.395908] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2032d90) 00:53:52.490 [2024-12-09 10:04:59.395913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.490 [2024-12-09 10:04:59.395927] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073a80, cid 3, qid 0 00:53:52.490 [2024-12-09 10:04:59.395969] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.490 [2024-12-09 10:04:59.395974] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.490 [2024-12-09 10:04:59.395977] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.395980] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073a80) on tqpair=0x2032d90 00:53:52.490 [2024-12-09 10:04:59.395987] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.395991] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.490 [2024-12-09 10:04:59.395993] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2032d90) 00:53:52.491 [2024-12-09 10:04:59.395999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.491 [2024-12-09 10:04:59.396012] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073a80, cid 3, qid 0 00:53:52.491 [2024-12-09 10:04:59.396060] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.491 [2024-12-09 10:04:59.396065] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.491 [2024-12-09 10:04:59.396067] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.491 [2024-12-09 10:04:59.396070] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073a80) on tqpair=0x2032d90 00:53:52.491 [2024-12-09 10:04:59.396077] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.491 [2024-12-09 10:04:59.396081] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.491 [2024-12-09 10:04:59.396083] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2032d90) 00:53:52.491 
[2024-12-09 10:04:59.396089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.491 [2024-12-09 10:04:59.396102] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073a80, cid 3, qid 0 00:53:52.491 [2024-12-09 10:04:59.396146] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.491 [2024-12-09 10:04:59.396152] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.491 [2024-12-09 10:04:59.396154] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.491 [2024-12-09 10:04:59.396157] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073a80) on tqpair=0x2032d90 00:53:52.491 [2024-12-09 10:04:59.396164] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.491 [2024-12-09 10:04:59.396167] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.491 [2024-12-09 10:04:59.396170] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2032d90) 00:53:52.491 [2024-12-09 10:04:59.396175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.491 [2024-12-09 10:04:59.396188] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073a80, cid 3, qid 0 00:53:52.491 [2024-12-09 10:04:59.396230] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.491 [2024-12-09 10:04:59.396236] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.491 [2024-12-09 10:04:59.396238] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.491 [2024-12-09 10:04:59.396241] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073a80) on tqpair=0x2032d90 00:53:52.491 [2024-12-09 10:04:59.396248] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.491 [2024-12-09 10:04:59.396251] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.491 [2024-12-09 10:04:59.396254] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2032d90) 00:53:52.491 [2024-12-09 10:04:59.396260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.491 [2024-12-09 10:04:59.396273] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073a80, cid 3, qid 0 00:53:52.491 [2024-12-09 10:04:59.396313] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.491 [2024-12-09 10:04:59.396318] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.491 [2024-12-09 10:04:59.396321] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.491 [2024-12-09 10:04:59.396324] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073a80) on tqpair=0x2032d90 00:53:52.491 [2024-12-09 10:04:59.396332] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.491 [2024-12-09 10:04:59.396335] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.491 [2024-12-09 10:04:59.396337] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2032d90) 00:53:52.491 [2024-12-09 10:04:59.396343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.491 [2024-12-09 10:04:59.396356] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073a80, cid 3, qid 0 00:53:52.491 [2024-12-09 10:04:59.396398] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.491 [2024-12-09 10:04:59.396403] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.491 [2024-12-09 10:04:59.396405] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.491 [2024-12-09 10:04:59.396408] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073a80) on tqpair=0x2032d90 00:53:52.491 [2024-12-09 10:04:59.396415] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.491 [2024-12-09 10:04:59.396418] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.491 [2024-12-09 10:04:59.396420] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2032d90) 00:53:52.491 [2024-12-09 10:04:59.396426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.491 [2024-12-09 10:04:59.396439] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073a80, cid 3, qid 0 00:53:52.491 [2024-12-09 10:04:59.396490] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.491 [2024-12-09 10:04:59.396496] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.491 [2024-12-09 10:04:59.396499] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.491 [2024-12-09 10:04:59.396502] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073a80) on tqpair=0x2032d90 00:53:52.491 [2024-12-09 10:04:59.396509] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.491 [2024-12-09 10:04:59.396512] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.491 [2024-12-09 10:04:59.396526] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2032d90) 00:53:52.491 [2024-12-09 10:04:59.396531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.491 [2024-12-09 10:04:59.396545] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073a80, cid 3, qid 0 00:53:52.491 [2024-12-09 10:04:59.396587] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.491 [2024-12-09 10:04:59.396591] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.491 [2024-12-09 10:04:59.396594] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.491 [2024-12-09 10:04:59.396596] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073a80) on tqpair=0x2032d90 00:53:52.491 [2024-12-09 10:04:59.396603] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.491 [2024-12-09 10:04:59.396607] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.491 [2024-12-09 10:04:59.396610] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2032d90) 00:53:52.491 [2024-12-09 10:04:59.396614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.491 [2024-12-09 10:04:59.396627] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073a80, cid 3, qid 0 00:53:52.491 [2024-12-09 10:04:59.396669] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.491 
[2024-12-09 10:04:59.396674] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.491 [2024-12-09 10:04:59.396676] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.491 [2024-12-09 10:04:59.396679] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073a80) on tqpair=0x2032d90 00:53:52.491 [2024-12-09 10:04:59.396685] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.491 [2024-12-09 10:04:59.396688] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.491 [2024-12-09 10:04:59.396690] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2032d90) 00:53:52.491 [2024-12-09 10:04:59.396695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.491 [2024-12-09 10:04:59.396707] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073a80, cid 3, qid 0 00:53:52.491 [2024-12-09 10:04:59.396753] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.491 [2024-12-09 10:04:59.396757] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.491 [2024-12-09 10:04:59.396759] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.491 [2024-12-09 10:04:59.396762] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073a80) on tqpair=0x2032d90 00:53:52.491 [2024-12-09 10:04:59.396768] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.491 [2024-12-09 10:04:59.396771] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.491 [2024-12-09 10:04:59.396774] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2032d90) 00:53:52.491 [2024-12-09 10:04:59.396779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.491 [2024-12-09 10:04:59.396791] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073a80, cid 3, qid 0 00:53:52.491 [2024-12-09 10:04:59.396838] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.491 [2024-12-09 10:04:59.396843] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.491 [2024-12-09 10:04:59.396845] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.491 [2024-12-09 10:04:59.396847] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073a80) on tqpair=0x2032d90 00:53:52.491 [2024-12-09 10:04:59.396854] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.491 [2024-12-09 10:04:59.396857] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.491 [2024-12-09 10:04:59.396859] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2032d90) 00:53:52.491 [2024-12-09 10:04:59.396864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.491 [2024-12-09 10:04:59.396876] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073a80, cid 3, qid 0 00:53:52.491 [2024-12-09 10:04:59.396922] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.492 [2024-12-09 10:04:59.396927] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.492 [2024-12-09 10:04:59.396929] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
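(Editorial aside on the repetition here: the near-identical FABRIC PROPERTY GET qid:0 cid:3 records above and below are one loop, not noise. After the Property Set a few records up requests shutdown, and after logging RTD3E = 0 us and a 10000 ms shutdown timeout, the driver appears to re-read CSTS once per iteration until CSTS.SHST (bits 3:2) reaches 10b, shutdown processing complete; this run gets there quickly, logged further down as "shutdown complete in 6 milliseconds". A compressed, self-contained rendering of that loop; read_csts() is a hypothetical stand-in for the fabrics Property Get and simply simulates a controller that finishes on the third read:

    #include <stdint.h>
    #include <stdio.h>

    #define NVME_CSTS_SHST_MASK      0xcu /* CSTS.SHST, bits 3:2            */
    #define NVME_CSTS_SHST_COMPLETE  0x8u /* 10b = shutdown processing done */
    #define NVME_CSTS_SHST_OCCURRING 0x4u /* 01b = shutdown in progress     */

    /* Hypothetical stand-in for the Property Get of CSTS seen in the log;
     * this toy controller completes shutdown on the third poll. */
    static uint32_t read_csts(void)
    {
        static int polls;
        return (++polls >= 3) ? NVME_CSTS_SHST_COMPLETE : NVME_CSTS_SHST_OCCURRING;
    }

    int main(void)
    {
        const int max_polls = 10000; /* stands in for the 10000 ms budget */
        for (int i = 1; i <= max_polls; i++) {
            if ((read_csts() & NVME_CSTS_SHST_MASK) == NVME_CSTS_SHST_COMPLETE) {
                printf("shutdown complete after %d polls\n", i);
                return 0;
            }
        }
        puts("shutdown timed out");
        return 1;
    })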
00:53:52.492 [2024-12-09 10:04:59.396931] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073a80) on tqpair=0x2032d90 00:53:52.492 [2024-12-09 10:04:59.396938] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.492 [2024-12-09 10:04:59.396940] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.492 [2024-12-09 10:04:59.396942] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2032d90) 00:53:52.492 [2024-12-09 10:04:59.396949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.492 [2024-12-09 10:04:59.396977] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073a80, cid 3, qid 0 00:53:52.492 [2024-12-09 10:04:59.397017] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.492 [2024-12-09 10:04:59.397022] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.492 [2024-12-09 10:04:59.397025] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.492 [2024-12-09 10:04:59.397027] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073a80) on tqpair=0x2032d90 00:53:52.492 [2024-12-09 10:04:59.397034] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.492 [2024-12-09 10:04:59.397037] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.492 [2024-12-09 10:04:59.397040] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2032d90) 00:53:52.492 [2024-12-09 10:04:59.397045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.492 [2024-12-09 10:04:59.397058] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073a80, cid 3, qid 0 00:53:52.492 [2024-12-09 10:04:59.397105] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.492 [2024-12-09 10:04:59.397110] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.492 [2024-12-09 10:04:59.397112] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.492 [2024-12-09 10:04:59.397115] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073a80) on tqpair=0x2032d90 00:53:52.492 [2024-12-09 10:04:59.397122] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.492 [2024-12-09 10:04:59.397125] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.492 [2024-12-09 10:04:59.397128] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2032d90) 00:53:52.492 [2024-12-09 10:04:59.397133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.492 [2024-12-09 10:04:59.397146] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073a80, cid 3, qid 0 00:53:52.492 [2024-12-09 10:04:59.397193] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.492 [2024-12-09 10:04:59.397198] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.492 [2024-12-09 10:04:59.397201] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.492 [2024-12-09 10:04:59.397204] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073a80) on tqpair=0x2032d90 00:53:52.492 [2024-12-09 10:04:59.397212] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.492 [2024-12-09 10:04:59.397215] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.492 [2024-12-09 10:04:59.397217] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2032d90) 00:53:52.492 [2024-12-09 10:04:59.397223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.492 [2024-12-09 10:04:59.397236] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073a80, cid 3, qid 0 00:53:52.492 [2024-12-09 10:04:59.397289] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.492 [2024-12-09 10:04:59.397293] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.492 [2024-12-09 10:04:59.397296] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.492 [2024-12-09 10:04:59.397298] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073a80) on tqpair=0x2032d90 00:53:52.492 [2024-12-09 10:04:59.397305] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.492 [2024-12-09 10:04:59.397308] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.492 [2024-12-09 10:04:59.397312] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2032d90) 00:53:52.492 [2024-12-09 10:04:59.397316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.492 [2024-12-09 10:04:59.397328] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073a80, cid 3, qid 0 00:53:52.492 [2024-12-09 10:04:59.397371] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.492 [2024-12-09 10:04:59.397375] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.492 [2024-12-09 10:04:59.397377] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.492 [2024-12-09 10:04:59.397380] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073a80) on tqpair=0x2032d90 00:53:52.492 [2024-12-09 10:04:59.397386] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.492 [2024-12-09 10:04:59.397389] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.492 [2024-12-09 10:04:59.397391] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2032d90) 00:53:52.492 [2024-12-09 10:04:59.397396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.492 [2024-12-09 10:04:59.397407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073a80, cid 3, qid 0 00:53:52.492 [2024-12-09 10:04:59.397454] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.492 [2024-12-09 10:04:59.397459] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.492 [2024-12-09 10:04:59.397461] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.492 [2024-12-09 10:04:59.397463] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073a80) on tqpair=0x2032d90 00:53:52.492 [2024-12-09 10:04:59.397470] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.492 [2024-12-09 10:04:59.397472] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.492 [2024-12-09 10:04:59.397475] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2032d90) 00:53:52.492 [2024-12-09 10:04:59.397479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.492 [2024-12-09 10:04:59.397491] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073a80, cid 3, qid 0 00:53:52.492 [2024-12-09 10:04:59.401515] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.492 [2024-12-09 10:04:59.401528] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.492 [2024-12-09 10:04:59.401531] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.492 [2024-12-09 10:04:59.401534] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073a80) on tqpair=0x2032d90 00:53:52.492 [2024-12-09 10:04:59.401543] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.492 [2024-12-09 10:04:59.401547] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.492 [2024-12-09 10:04:59.401549] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2032d90) 00:53:52.492 [2024-12-09 10:04:59.401556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.492 [2024-12-09 10:04:59.401574] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2073a80, cid 3, qid 0 00:53:52.492 [2024-12-09 10:04:59.401615] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.492 [2024-12-09 10:04:59.401620] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.492 [2024-12-09 10:04:59.401623] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.492 [2024-12-09 10:04:59.401626] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2073a80) on tqpair=0x2032d90 00:53:52.492 [2024-12-09 10:04:59.401632] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:53:52.492 00:53:52.492 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:53:52.492 [2024-12-09 10:04:59.450590] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
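(Editorial aside before the second identify run below: every "pdu type = N" line in these traces is the type byte of an NVMe/TCP PDU header, and only three values occur. Type 1 (ICResp) appears once per connection, during the icreq/icresp setup that the trace below starts with; type 5 (CapsuleResp) accompanies every command completion; type 7 (C2HData) appears whenever the controller returns payload, as in the identify and log-page reads above. A small reference table as code, using the type values from the NVMe/TCP transport specification; the enum names are illustrative, not SPDK's own identifiers:

    #include <stdio.h>

    /* NVMe/TCP PDU types per the NVMe/TCP transport spec. */
    enum nvme_tcp_pdu_type {
        PDU_ICREQ        = 0x00,
        PDU_ICRESP       = 0x01, /* "pdu type = 1": connection setup reply   */
        PDU_H2C_TERM_REQ = 0x02,
        PDU_C2H_TERM_REQ = 0x03,
        PDU_CAPSULE_CMD  = 0x04,
        PDU_CAPSULE_RESP = 0x05, /* "pdu type = 5": command completion       */
        PDU_H2C_DATA     = 0x06,
        PDU_C2H_DATA     = 0x07, /* "pdu type = 7": controller-to-host data  */
        PDU_R2T          = 0x09,
    };

    static const char *pdu_name(int type)
    {
        switch (type) {
        case PDU_ICRESP:       return "ICResp";
        case PDU_CAPSULE_RESP: return "CapsuleResp";
        case PDU_C2H_DATA:     return "C2HData";
        default:               return "other";
        }
    }

    int main(void)
    {
        const int seen[] = { 1, 5, 7 }; /* the only types in this trace */
        for (unsigned i = 0; i < sizeof(seen) / sizeof(seen[0]); i++)
            printf("pdu type = %d -> %s\n", seen[i], pdu_name(seen[i]));
        return 0;
    }

Capsule commands themselves (type 4) never show up as "pdu type" lines because these prints come from the receive path; outgoing commands are logged as the capsule_cmd lines instead.)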
00:53:52.492 [2024-12-09 10:04:59.450658] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87963 ] 00:53:52.756 [2024-12-09 10:04:59.599828] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:53:52.756 [2024-12-09 10:04:59.599946] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:53:52.757 [2024-12-09 10:04:59.599957] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:53:52.757 [2024-12-09 10:04:59.599988] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:53:52.757 [2024-12-09 10:04:59.600010] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:53:52.757 [2024-12-09 10:04:59.604660] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:53:52.757 [2024-12-09 10:04:59.604771] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xee6d90 0 00:53:52.757 [2024-12-09 10:04:59.612527] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:53:52.757 [2024-12-09 10:04:59.612561] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:53:52.757 [2024-12-09 10:04:59.612569] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:53:52.757 [2024-12-09 10:04:59.612573] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:53:52.757 [2024-12-09 10:04:59.612622] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.757 [2024-12-09 10:04:59.612632] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.757 [2024-12-09 10:04:59.612636] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xee6d90) 00:53:52.757 [2024-12-09 10:04:59.612659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:53:52.757 [2024-12-09 10:04:59.612705] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf27600, cid 0, qid 0 00:53:52.757 [2024-12-09 10:04:59.620525] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.757 [2024-12-09 10:04:59.620575] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.757 [2024-12-09 10:04:59.620581] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.757 [2024-12-09 10:04:59.620589] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf27600) on tqpair=0xee6d90 00:53:52.757 [2024-12-09 10:04:59.620611] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:53:52.757 [2024-12-09 10:04:59.620625] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:53:52.757 [2024-12-09 10:04:59.620634] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:53:52.757 [2024-12-09 10:04:59.620669] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.757 [2024-12-09 10:04:59.620675] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.757 [2024-12-09 10:04:59.620680] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xee6d90) 00:53:52.757 [2024-12-09 10:04:59.620698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.757 [2024-12-09 10:04:59.620759] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf27600, cid 0, qid 0 00:53:52.757 [2024-12-09 10:04:59.620838] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.757 [2024-12-09 10:04:59.620845] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.757 [2024-12-09 10:04:59.620849] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.757 [2024-12-09 10:04:59.620853] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf27600) on tqpair=0xee6d90 00:53:52.757 [2024-12-09 10:04:59.620860] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:53:52.757 [2024-12-09 10:04:59.620867] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:53:52.757 [2024-12-09 10:04:59.620875] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.757 [2024-12-09 10:04:59.620880] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.757 [2024-12-09 10:04:59.620883] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xee6d90) 00:53:52.757 [2024-12-09 10:04:59.620891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.757 [2024-12-09 10:04:59.620911] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf27600, cid 0, qid 0 00:53:52.757 [2024-12-09 10:04:59.620959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.757 [2024-12-09 10:04:59.620972] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.757 [2024-12-09 10:04:59.620976] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.757 [2024-12-09 10:04:59.620981] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf27600) on tqpair=0xee6d90 00:53:52.757 [2024-12-09 10:04:59.620989] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:53:52.757 [2024-12-09 10:04:59.621000] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:53:52.757 [2024-12-09 10:04:59.621007] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.757 [2024-12-09 10:04:59.621011] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.757 [2024-12-09 10:04:59.621015] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xee6d90) 00:53:52.757 [2024-12-09 10:04:59.621022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.757 [2024-12-09 10:04:59.621041] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf27600, cid 0, qid 0 00:53:52.757 [2024-12-09 10:04:59.621091] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.757 [2024-12-09 10:04:59.621100] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.757 [2024-12-09 
10:04:59.621104] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.757 [2024-12-09 10:04:59.621109] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf27600) on tqpair=0xee6d90 00:53:52.757 [2024-12-09 10:04:59.621116] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:53:52.757 [2024-12-09 10:04:59.621127] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.757 [2024-12-09 10:04:59.621131] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.757 [2024-12-09 10:04:59.621135] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xee6d90) 00:53:52.757 [2024-12-09 10:04:59.621142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.757 [2024-12-09 10:04:59.621162] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf27600, cid 0, qid 0 00:53:52.757 [2024-12-09 10:04:59.621225] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.757 [2024-12-09 10:04:59.621238] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.757 [2024-12-09 10:04:59.621242] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.757 [2024-12-09 10:04:59.621246] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf27600) on tqpair=0xee6d90 00:53:52.757 [2024-12-09 10:04:59.621251] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:53:52.757 [2024-12-09 10:04:59.621257] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:53:52.757 [2024-12-09 10:04:59.621264] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:53:52.757 [2024-12-09 10:04:59.621372] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:53:52.757 [2024-12-09 10:04:59.621379] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:53:52.757 [2024-12-09 10:04:59.621390] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.757 [2024-12-09 10:04:59.621394] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.757 [2024-12-09 10:04:59.621397] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xee6d90) 00:53:52.757 [2024-12-09 10:04:59.621405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.757 [2024-12-09 10:04:59.621426] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf27600, cid 0, qid 0 00:53:52.757 [2024-12-09 10:04:59.621495] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.757 [2024-12-09 10:04:59.621502] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.757 [2024-12-09 10:04:59.621506] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.757 [2024-12-09 10:04:59.621510] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf27600) on tqpair=0xee6d90 00:53:52.757 
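
After CONNECT returns CNTLID 0x0001, the trace shows the standard controller-enable handshake, carried over fabrics as Property Get/Set capsules instead of MMIO: read VS, read CAP, check CC.EN, wait for CSTS.RDY = 0, write CC.EN = 1, and (continuing just below) wait for CSTS.RDY = 1, each step bounded by the CAP-derived 15000 ms timeout. Once spdk_nvme_connect() has finished, the same register values can be read back through the public getters; a minimal sketch, assuming the ctrlr handle from the sketch above (print_init_regs is an illustrative name):

#include <stdio.h>
#include "spdk/nvme.h"

static void
print_init_regs(struct spdk_nvme_ctrlr *ctrlr)
{
	union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
	union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
	union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

	printf("VS %u.%u\n", vs.bits.mjr, vs.bits.mnr);   /* the "read vs" step */
	/* CAP.TO is in 500 ms units; a value of 30 yields the 15000 ms timeouts seen here */
	printf("CAP.TO %u (%u ms)\n", cap.bits.to, cap.bits.to * 500);
	printf("CSTS.RDY %u\n", csts.bits.rdy);           /* 1 once the CC.EN=1 handshake is done */
}
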
[2024-12-09 10:04:59.621516] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:53:52.757 [2024-12-09 10:04:59.621526] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.757 [2024-12-09 10:04:59.621530] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.757 [2024-12-09 10:04:59.621534] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xee6d90) 00:53:52.757 [2024-12-09 10:04:59.621541] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.757 [2024-12-09 10:04:59.621563] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf27600, cid 0, qid 0 00:53:52.757 [2024-12-09 10:04:59.621621] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.757 [2024-12-09 10:04:59.621628] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.757 [2024-12-09 10:04:59.621631] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.757 [2024-12-09 10:04:59.621636] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf27600) on tqpair=0xee6d90 00:53:52.757 [2024-12-09 10:04:59.621641] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:53:52.757 [2024-12-09 10:04:59.621646] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:53:52.757 [2024-12-09 10:04:59.621653] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:53:52.757 [2024-12-09 10:04:59.621666] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:53:52.757 [2024-12-09 10:04:59.621681] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.757 [2024-12-09 10:04:59.621685] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xee6d90) 00:53:52.757 [2024-12-09 10:04:59.621692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.757 [2024-12-09 10:04:59.621712] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf27600, cid 0, qid 0 00:53:52.757 [2024-12-09 10:04:59.621850] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:53:52.757 [2024-12-09 10:04:59.621870] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:53:52.757 [2024-12-09 10:04:59.621875] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:53:52.757 [2024-12-09 10:04:59.621880] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xee6d90): datao=0, datal=4096, cccid=0 00:53:52.757 [2024-12-09 10:04:59.621885] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf27600) on tqpair(0xee6d90): expected_datao=0, payload_size=4096 00:53:52.757 [2024-12-09 10:04:59.621890] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.758 [2024-12-09 10:04:59.621900] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:53:52.758 [2024-12-09 10:04:59.621905] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:53:52.758 [2024-12-09 10:04:59.621915] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.758 [2024-12-09 10:04:59.621921] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.758 [2024-12-09 10:04:59.621924] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.758 [2024-12-09 10:04:59.621929] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf27600) on tqpair=0xee6d90 00:53:52.758 [2024-12-09 10:04:59.621942] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:53:52.758 [2024-12-09 10:04:59.621948] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:53:52.758 [2024-12-09 10:04:59.621953] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:53:52.758 [2024-12-09 10:04:59.621958] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:53:52.758 [2024-12-09 10:04:59.621969] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:53:52.758 [2024-12-09 10:04:59.621975] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:53:52.758 [2024-12-09 10:04:59.621985] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:53:52.758 [2024-12-09 10:04:59.621992] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.758 [2024-12-09 10:04:59.621997] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.758 [2024-12-09 10:04:59.622001] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xee6d90) 00:53:52.758 [2024-12-09 10:04:59.622008] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:53:52.758 [2024-12-09 10:04:59.622033] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf27600, cid 0, qid 0 00:53:52.758 [2024-12-09 10:04:59.622104] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.758 [2024-12-09 10:04:59.622116] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.758 [2024-12-09 10:04:59.622120] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.758 [2024-12-09 10:04:59.622124] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf27600) on tqpair=0xee6d90 00:53:52.758 [2024-12-09 10:04:59.622139] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.758 [2024-12-09 10:04:59.622144] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.758 [2024-12-09 10:04:59.622148] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xee6d90) 00:53:52.758 [2024-12-09 10:04:59.622155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:53:52.758 [2024-12-09 10:04:59.622161] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.758 [2024-12-09 10:04:59.622165] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.758 [2024-12-09 10:04:59.622169] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=1 on tqpair(0xee6d90) 00:53:52.758 [2024-12-09 10:04:59.622174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:53:52.758 [2024-12-09 10:04:59.622181] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.758 [2024-12-09 10:04:59.622185] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.758 [2024-12-09 10:04:59.622189] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xee6d90) 00:53:52.758 [2024-12-09 10:04:59.622195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:53:52.758 [2024-12-09 10:04:59.622201] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.758 [2024-12-09 10:04:59.622205] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.758 [2024-12-09 10:04:59.622210] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee6d90) 00:53:52.758 [2024-12-09 10:04:59.622215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:53:52.758 [2024-12-09 10:04:59.622221] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:53:52.758 [2024-12-09 10:04:59.622230] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:53:52.758 [2024-12-09 10:04:59.622238] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.758 [2024-12-09 10:04:59.622242] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xee6d90) 00:53:52.758 [2024-12-09 10:04:59.622248] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.758 [2024-12-09 10:04:59.622273] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf27600, cid 0, qid 0 00:53:52.758 [2024-12-09 10:04:59.622279] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf27780, cid 1, qid 0 00:53:52.758 [2024-12-09 10:04:59.622284] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf27900, cid 2, qid 0 00:53:52.758 [2024-12-09 10:04:59.622290] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf27a80, cid 3, qid 0 00:53:52.758 [2024-12-09 10:04:59.622295] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf27c00, cid 4, qid 0 00:53:52.758 [2024-12-09 10:04:59.622406] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.758 [2024-12-09 10:04:59.622422] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.758 [2024-12-09 10:04:59.622428] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.758 [2024-12-09 10:04:59.622432] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf27c00) on tqpair=0xee6d90 00:53:52.758 [2024-12-09 10:04:59.622444] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:53:52.758 [2024-12-09 10:04:59.622450] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific 
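
By this point IDENTIFY (CNS 01) has completed and the driver has logged the derived limits (transport max_xfer_size 4294967295 capped by MDTS to 131072, fused compare-and-write supported), armed four ASYNC EVENT REQUESTs to match the controller's Async Event Request Limit, and set up keep alive. The identify data is cached on the controller handle; a minimal sketch of reading it back and hooking AER completions, again assuming ctrlr from the connect sketch (on_aer and inspect_ctrlr are illustrative names):

#include <stdio.h>
#include "spdk/nvme.h"

static void
on_aer(void *arg, const struct spdk_nvme_cpl *cpl)
{
	if (!spdk_nvme_cpl_is_error(cpl)) {
		printf("async event, cdw0=0x%08x\n", cpl->cdw0);   /* e.g. namespace attribute change */
	}
}

static void
inspect_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
{
	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

	/* MDTS is a shift count over the 4096-byte minimum page:
	 * 4096 << 5 == 131072, matching "MDTS max_xfer_size 131072" above. */
	printf("CNTLID 0x%04x, MDTS %u, max xfer %u bytes\n",
	       cdata->cntlid, cdata->mdts, spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));
	printf("fused compare-and-write: %u\n", cdata->fuses.compare_and_write);

	/* The driver re-arms AERs itself; applications only register a callback. */
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, on_aer, NULL);
}
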
(timeout 30000 ms) 00:53:52.758 [2024-12-09 10:04:59.622459] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:53:52.758 [2024-12-09 10:04:59.622466] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:53:52.758 [2024-12-09 10:04:59.622487] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.758 [2024-12-09 10:04:59.622493] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.758 [2024-12-09 10:04:59.622497] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xee6d90) 00:53:52.758 [2024-12-09 10:04:59.622504] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:53:52.758 [2024-12-09 10:04:59.622527] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf27c00, cid 4, qid 0 00:53:52.758 [2024-12-09 10:04:59.622579] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.758 [2024-12-09 10:04:59.622586] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.758 [2024-12-09 10:04:59.622589] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.758 [2024-12-09 10:04:59.622593] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf27c00) on tqpair=0xee6d90 00:53:52.758 [2024-12-09 10:04:59.622661] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:53:52.758 [2024-12-09 10:04:59.622678] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:53:52.758 [2024-12-09 10:04:59.622688] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.758 [2024-12-09 10:04:59.622692] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xee6d90) 00:53:52.758 [2024-12-09 10:04:59.622699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.758 [2024-12-09 10:04:59.622720] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf27c00, cid 4, qid 0 00:53:52.758 [2024-12-09 10:04:59.622787] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:53:52.758 [2024-12-09 10:04:59.622798] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:53:52.758 [2024-12-09 10:04:59.622801] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:53:52.758 [2024-12-09 10:04:59.622806] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xee6d90): datao=0, datal=4096, cccid=4 00:53:52.758 [2024-12-09 10:04:59.622811] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf27c00) on tqpair(0xee6d90): expected_datao=0, payload_size=4096 00:53:52.758 [2024-12-09 10:04:59.622815] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.758 [2024-12-09 10:04:59.622823] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:53:52.758 [2024-12-09 10:04:59.622828] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:53:52.758 [2024-12-09 10:04:59.622837] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.758 [2024-12-09 
10:04:59.622843] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.758 [2024-12-09 10:04:59.622847] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.758 [2024-12-09 10:04:59.622851] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf27c00) on tqpair=0xee6d90 00:53:52.758 [2024-12-09 10:04:59.622875] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:53:52.758 [2024-12-09 10:04:59.622890] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:53:52.758 [2024-12-09 10:04:59.622901] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:53:52.758 [2024-12-09 10:04:59.622908] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.758 [2024-12-09 10:04:59.622912] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xee6d90) 00:53:52.758 [2024-12-09 10:04:59.622919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.758 [2024-12-09 10:04:59.622940] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf27c00, cid 4, qid 0 00:53:52.758 [2024-12-09 10:04:59.623035] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:53:52.758 [2024-12-09 10:04:59.623048] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:53:52.758 [2024-12-09 10:04:59.623052] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:53:52.758 [2024-12-09 10:04:59.623056] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xee6d90): datao=0, datal=4096, cccid=4 00:53:52.758 [2024-12-09 10:04:59.623059] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf27c00) on tqpair(0xee6d90): expected_datao=0, payload_size=4096 00:53:52.758 [2024-12-09 10:04:59.623063] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.758 [2024-12-09 10:04:59.623070] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:53:52.758 [2024-12-09 10:04:59.623074] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:53:52.758 [2024-12-09 10:04:59.623083] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.758 [2024-12-09 10:04:59.623088] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.758 [2024-12-09 10:04:59.623092] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.758 [2024-12-09 10:04:59.623096] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf27c00) on tqpair=0xee6d90 00:53:52.759 [2024-12-09 10:04:59.623116] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:53:52.759 [2024-12-09 10:04:59.623126] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:53:52.759 [2024-12-09 10:04:59.623135] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.759 [2024-12-09 10:04:59.623139] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xee6d90) 00:53:52.759 [2024-12-09 10:04:59.623146] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.759 [2024-12-09 10:04:59.623168] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf27c00, cid 4, qid 0 00:53:52.759 [2024-12-09 10:04:59.623256] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:53:52.759 [2024-12-09 10:04:59.623262] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:53:52.759 [2024-12-09 10:04:59.623266] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:53:52.759 [2024-12-09 10:04:59.623270] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xee6d90): datao=0, datal=4096, cccid=4 00:53:52.759 [2024-12-09 10:04:59.623275] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf27c00) on tqpair(0xee6d90): expected_datao=0, payload_size=4096 00:53:52.759 [2024-12-09 10:04:59.623279] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.759 [2024-12-09 10:04:59.623286] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:53:52.759 [2024-12-09 10:04:59.623298] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:53:52.759 [2024-12-09 10:04:59.623307] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.759 [2024-12-09 10:04:59.623313] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.759 [2024-12-09 10:04:59.623317] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.759 [2024-12-09 10:04:59.623321] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf27c00) on tqpair=0xee6d90 00:53:52.759 [2024-12-09 10:04:59.623331] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:53:52.759 [2024-12-09 10:04:59.623339] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:53:52.759 [2024-12-09 10:04:59.623356] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:53:52.759 [2024-12-09 10:04:59.623364] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:53:52.759 [2024-12-09 10:04:59.623369] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:53:52.759 [2024-12-09 10:04:59.623375] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:53:52.759 [2024-12-09 10:04:59.623381] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:53:52.759 [2024-12-09 10:04:59.623386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:53:52.759 [2024-12-09 10:04:59.623392] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:53:52.759 [2024-12-09 10:04:59.623418] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.759 [2024-12-09 10:04:59.623422] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
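
The controller reported one attached namespace ("Namespace 1 was added"), and the init state machine walks it with IDENTIFY CNS 02 (active namespace list), CNS 00 (namespace data) and CNS 03 (namespace ID descriptors), all visible above as 4096-byte C2H data transfers. After init the result is exposed through iterator helpers; a minimal sketch, assuming ctrlr as before (list_namespaces is an illustrative name):

#include <stdio.h>
#include "spdk/nvme.h"

static void
list_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
	uint32_t nsid;

	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
	     nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
		const struct spdk_nvme_ns_data *nsdata = spdk_nvme_ns_get_data(ns);

		printf("ns %u: %llu blocks of %u bytes\n", nsid,
		       (unsigned long long)nsdata->nsze,
		       spdk_nvme_ns_get_sector_size(ns));
	}
}
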
tqpair(0xee6d90) 00:53:52.759 [2024-12-09 10:04:59.623429] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.759 [2024-12-09 10:04:59.623437] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.759 [2024-12-09 10:04:59.623441] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.759 [2024-12-09 10:04:59.623445] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xee6d90) 00:53:52.759 [2024-12-09 10:04:59.623451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:53:52.759 [2024-12-09 10:04:59.623501] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf27c00, cid 4, qid 0 00:53:52.759 [2024-12-09 10:04:59.623509] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf27d80, cid 5, qid 0 00:53:52.759 [2024-12-09 10:04:59.623581] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.759 [2024-12-09 10:04:59.623588] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.759 [2024-12-09 10:04:59.623592] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.759 [2024-12-09 10:04:59.623596] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf27c00) on tqpair=0xee6d90 00:53:52.759 [2024-12-09 10:04:59.623602] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.759 [2024-12-09 10:04:59.623608] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.759 [2024-12-09 10:04:59.623612] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.759 [2024-12-09 10:04:59.623616] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf27d80) on tqpair=0xee6d90 00:53:52.759 [2024-12-09 10:04:59.623625] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.759 [2024-12-09 10:04:59.623629] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xee6d90) 00:53:52.759 [2024-12-09 10:04:59.623635] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.759 [2024-12-09 10:04:59.623655] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf27d80, cid 5, qid 0 00:53:52.759 [2024-12-09 10:04:59.623720] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.759 [2024-12-09 10:04:59.623728] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.759 [2024-12-09 10:04:59.623732] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.759 [2024-12-09 10:04:59.623736] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf27d80) on tqpair=0xee6d90 00:53:52.759 [2024-12-09 10:04:59.623744] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.759 [2024-12-09 10:04:59.623749] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xee6d90) 00:53:52.759 [2024-12-09 10:04:59.623755] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.759 [2024-12-09 10:04:59.623773] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf27d80, cid 5, qid 0 00:53:52.759 [2024-12-09 10:04:59.623821] 
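
The KEEP ALIVE and GET FEATURES commands above (arbitration, power management, temperature threshold) feed the report printed further down. Admin commands are asynchronous, so a caller polls the admin queue for completions; a minimal sketch of issuing one of them explicitly, assuming ctrlr as before (feat_cb, g_done and get_temp_threshold are illustrative names):

#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

static bool g_done;

static void
feat_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	if (!spdk_nvme_cpl_is_error(cpl)) {
		printf("temperature threshold, cdw0=0x%x\n", cpl->cdw0);
	}
	g_done = true;
}

static void
get_temp_threshold(struct spdk_nvme_ctrlr *ctrlr)
{
	g_done = false;
	/* feature 0x4, matching "GET FEATURES TEMPERATURE THRESHOLD ... cdw10:00000004" */
	if (spdk_nvme_ctrlr_cmd_get_feature(ctrlr, SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
					    0, NULL, 0, feat_cb, NULL) != 0) {
		return;
	}
	while (!g_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
}
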
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.759 [2024-12-09 10:04:59.623828] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.759 [2024-12-09 10:04:59.623831] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.759 [2024-12-09 10:04:59.623835] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf27d80) on tqpair=0xee6d90 00:53:52.759 [2024-12-09 10:04:59.623843] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.759 [2024-12-09 10:04:59.623847] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xee6d90) 00:53:52.759 [2024-12-09 10:04:59.623854] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.759 [2024-12-09 10:04:59.623872] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf27d80, cid 5, qid 0 00:53:52.759 [2024-12-09 10:04:59.623938] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.759 [2024-12-09 10:04:59.623946] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.759 [2024-12-09 10:04:59.623950] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.759 [2024-12-09 10:04:59.623954] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf27d80) on tqpair=0xee6d90 00:53:52.759 [2024-12-09 10:04:59.623974] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.759 [2024-12-09 10:04:59.623979] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xee6d90) 00:53:52.759 [2024-12-09 10:04:59.623986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.759 [2024-12-09 10:04:59.623994] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.759 [2024-12-09 10:04:59.623998] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xee6d90) 00:53:52.759 [2024-12-09 10:04:59.624004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.759 [2024-12-09 10:04:59.624011] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.759 [2024-12-09 10:04:59.624015] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xee6d90) 00:53:52.759 [2024-12-09 10:04:59.624022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.759 [2024-12-09 10:04:59.624035] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.759 [2024-12-09 10:04:59.624039] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xee6d90) 00:53:52.759 [2024-12-09 10:04:59.624045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.759 [2024-12-09 10:04:59.624066] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf27d80, cid 5, qid 0 00:53:52.759 [2024-12-09 10:04:59.624072] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf27c00, cid 4, qid 0 00:53:52.759 
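
The four GET LOG PAGE commands encode the log identifier in the low byte of cdw10 and the dword count minus one in the upper half: 07ff0001 is the error log (128 entries, 8192 bytes), 007f0002 SMART/health (512 bytes), 007f0003 firmware slot, 03ff0005 command effects; their contents become the Error Log, Health Information and Commands Supported and Effects sections of the report below. A minimal sketch of fetching the health page with the public helper, assuming ctrlr and the polling pattern from the previous sketch (g_health, log_cb and read_health_page are illustrative names):

#include "spdk/nvme.h"

static struct spdk_nvme_health_information_page g_health;

static void
log_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	/* on success, g_health now holds the SMART/health data */
}

static int
read_health_page(struct spdk_nvme_ctrlr *ctrlr)
{
	/* log page 02h for all namespaces; the completion arrives while
	 * spdk_nvme_ctrlr_process_admin_completions() is being polled */
	return spdk_nvme_ctrlr_cmd_get_log_page(ctrlr,
			SPDK_NVME_LOG_HEALTH_INFORMATION,
			SPDK_NVME_GLOBAL_NS_TAG,
			&g_health, sizeof(g_health), 0,
			log_cb, NULL);
}
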
[2024-12-09 10:04:59.624077] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf27f00, cid 6, qid 0 00:53:52.759 [2024-12-09 10:04:59.624082] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf28080, cid 7, qid 0 00:53:52.759 [2024-12-09 10:04:59.624236] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:53:52.759 [2024-12-09 10:04:59.624254] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:53:52.759 [2024-12-09 10:04:59.624258] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:53:52.759 [2024-12-09 10:04:59.624262] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xee6d90): datao=0, datal=8192, cccid=5 00:53:52.759 [2024-12-09 10:04:59.624267] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf27d80) on tqpair(0xee6d90): expected_datao=0, payload_size=8192 00:53:52.759 [2024-12-09 10:04:59.624272] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.759 [2024-12-09 10:04:59.624290] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:53:52.759 [2024-12-09 10:04:59.624295] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:53:52.759 [2024-12-09 10:04:59.624300] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:53:52.759 [2024-12-09 10:04:59.624306] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:53:52.759 [2024-12-09 10:04:59.624309] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:53:52.759 [2024-12-09 10:04:59.624313] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xee6d90): datao=0, datal=512, cccid=4 00:53:52.759 [2024-12-09 10:04:59.624318] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf27c00) on tqpair(0xee6d90): expected_datao=0, payload_size=512 00:53:52.759 [2024-12-09 10:04:59.624322] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.759 [2024-12-09 10:04:59.624328] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:53:52.759 [2024-12-09 10:04:59.624332] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:53:52.759 [2024-12-09 10:04:59.624337] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:53:52.759 [2024-12-09 10:04:59.624343] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:53:52.759 [2024-12-09 10:04:59.624346] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:53:52.760 [2024-12-09 10:04:59.624350] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xee6d90): datao=0, datal=512, cccid=6 00:53:52.760 [2024-12-09 10:04:59.624354] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf27f00) on tqpair(0xee6d90): expected_datao=0, payload_size=512 00:53:52.760 [2024-12-09 10:04:59.624359] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.760 [2024-12-09 10:04:59.624365] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:53:52.760 [2024-12-09 10:04:59.624369] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:53:52.760 [2024-12-09 10:04:59.624374] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:53:52.760 [2024-12-09 10:04:59.624380] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:53:52.760 [2024-12-09 10:04:59.624383] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:53:52.760 [2024-12-09 10:04:59.624387] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
c2h_data info on tqpair(0xee6d90): datao=0, datal=4096, cccid=7
00:53:52.760 [2024-12-09 10:04:59.624392] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf28080) on tqpair(0xee6d90): expected_datao=0, payload_size=4096
00:53:52.760 [2024-12-09 10:04:59.624396] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:53:52.760 [2024-12-09 10:04:59.624403] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:53:52.760 [2024-12-09 10:04:59.624407] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:53:52.760 [2024-12-09 10:04:59.624412] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:53:52.760 [2024-12-09 10:04:59.624418] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:53:52.760 [2024-12-09 10:04:59.624421] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:53:52.760 [2024-12-09 10:04:59.624425] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf27d80) on tqpair=0xee6d90
00:53:52.760 [2024-12-09 10:04:59.624444] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:53:52.760 [2024-12-09 10:04:59.624450] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:53:52.760 [2024-12-09 10:04:59.624454] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:53:52.760 [2024-12-09 10:04:59.624458] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf27c00) on tqpair=0xee6d90
00:53:52.760 [2024-12-09 10:04:59.624471] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:53:52.760 [2024-12-09 10:04:59.628525] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:53:52.760 [2024-12-09 10:04:59.628545] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:53:52.760 [2024-12-09 10:04:59.628550] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf27f00) on tqpair=0xee6d90
00:53:52.760 [2024-12-09 10:04:59.628563] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:53:52.760 [2024-12-09 10:04:59.628585] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:53:52.760 [2024-12-09 10:04:59.628589] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:53:52.760 [2024-12-09 10:04:59.628594] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf28080) on tqpair=0xee6d90
00:53:52.760 =====================================================
00:53:52.760 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:53:52.760 =====================================================
00:53:52.760 Controller Capabilities/Features
00:53:52.760 ================================
00:53:52.760 Vendor ID: 8086
00:53:52.760 Subsystem Vendor ID: 8086
00:53:52.760 Serial Number: SPDK00000000000001
00:53:52.760 Model Number: SPDK bdev Controller
00:53:52.760 Firmware Version: 25.01
00:53:52.760 Recommended Arb Burst: 6
00:53:52.760 IEEE OUI Identifier: e4 d2 5c
00:53:52.760 Multi-path I/O
00:53:52.760 May have multiple subsystem ports: Yes
00:53:52.760 May have multiple controllers: Yes
00:53:52.760 Associated with SR-IOV VF: No
00:53:52.760 Max Data Transfer Size: 131072
00:53:52.760 Max Number of Namespaces: 32
00:53:52.760 Max Number of I/O Queues: 127
00:53:52.760 NVMe Specification Version (VS): 1.3
00:53:52.760 NVMe Specification Version (Identify): 1.3
00:53:52.760 Maximum Queue Entries: 128
00:53:52.760 Contiguous Queues Required: Yes
00:53:52.760 Arbitration Mechanisms Supported
00:53:52.760 Weighted Round Robin: Not Supported
00:53:52.760 Vendor Specific: Not Supported
00:53:52.760 Reset Timeout: 15000 ms
00:53:52.760 Doorbell Stride: 4 bytes
00:53:52.760 NVM Subsystem Reset: Not Supported
00:53:52.760 Command Sets Supported
00:53:52.760 NVM Command Set: Supported
00:53:52.760 Boot Partition: Not Supported
00:53:52.760 Memory Page Size Minimum: 4096 bytes
00:53:52.760 Memory Page Size Maximum: 4096 bytes
00:53:52.760 Persistent Memory Region: Not Supported
00:53:52.760 Optional Asynchronous Events Supported
00:53:52.760 Namespace Attribute Notices: Supported
00:53:52.760 Firmware Activation Notices: Not Supported
00:53:52.760 ANA Change Notices: Not Supported
00:53:52.760 PLE Aggregate Log Change Notices: Not Supported
00:53:52.760 LBA Status Info Alert Notices: Not Supported
00:53:52.760 EGE Aggregate Log Change Notices: Not Supported
00:53:52.760 Normal NVM Subsystem Shutdown event: Not Supported
00:53:52.760 Zone Descriptor Change Notices: Not Supported
00:53:52.760 Discovery Log Change Notices: Not Supported
00:53:52.760 Controller Attributes
00:53:52.760 128-bit Host Identifier: Supported
00:53:52.760 Non-Operational Permissive Mode: Not Supported
00:53:52.760 NVM Sets: Not Supported
00:53:52.760 Read Recovery Levels: Not Supported
00:53:52.760 Endurance Groups: Not Supported
00:53:52.760 Predictable Latency Mode: Not Supported
00:53:52.760 Traffic Based Keep Alive: Not Supported
00:53:52.760 Namespace Granularity: Not Supported
00:53:52.760 SQ Associations: Not Supported
00:53:52.760 UUID List: Not Supported
00:53:52.760 Multi-Domain Subsystem: Not Supported
00:53:52.760 Fixed Capacity Management: Not Supported
00:53:52.760 Variable Capacity Management: Not Supported
00:53:52.760 Delete Endurance Group: Not Supported
00:53:52.760 Delete NVM Set: Not Supported
00:53:52.760 Extended LBA Formats Supported: Not Supported
00:53:52.760 Flexible Data Placement Supported: Not Supported
00:53:52.760
00:53:52.760 Controller Memory Buffer Support
00:53:52.760 ================================
00:53:52.760 Supported: No
00:53:52.760
00:53:52.760 Persistent Memory Region Support
00:53:52.760 ================================
00:53:52.760 Supported: No
00:53:52.760
00:53:52.760 Admin Command Set Attributes
00:53:52.760 ============================
00:53:52.760 Security Send/Receive: Not Supported
00:53:52.760 Format NVM: Not Supported
00:53:52.760 Firmware Activate/Download: Not Supported
00:53:52.760 Namespace Management: Not Supported
00:53:52.760 Device Self-Test: Not Supported
00:53:52.760 Directives: Not Supported
00:53:52.760 NVMe-MI: Not Supported
00:53:52.760 Virtualization Management: Not Supported
00:53:52.760 Doorbell Buffer Config: Not Supported
00:53:52.760 Get LBA Status Capability: Not Supported
00:53:52.760 Command & Feature Lockdown Capability: Not Supported
00:53:52.760 Abort Command Limit: 4
00:53:52.760 Async Event Request Limit: 4
00:53:52.760 Number of Firmware Slots: N/A
00:53:52.760 Firmware Slot 1 Read-Only: N/A
00:53:52.760 Firmware Activation Without Reset: N/A
00:53:52.760 Multiple Update Detection Support: N/A
00:53:52.760 Firmware Update Granularity: No Information Provided
00:53:52.760 Per-Namespace SMART Log: No
00:53:52.760 Asymmetric Namespace Access Log Page: Not Supported
00:53:52.760 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:53:52.760 Command Effects Log Page: Supported
00:53:52.760 Get Log Page Extended Data: Supported
00:53:52.760 Telemetry Log Pages: Not Supported
00:53:52.760 Persistent Event Log Pages: Not Supported
00:53:52.760 Supported Log Pages Log Page: May Support
00:53:52.760 Commands Supported & Effects Log Page: Not Supported
00:53:52.760 Feature Identifiers & Effects Log Page: May Support
00:53:52.760 NVMe-MI Commands & Effects Log Page: May Support
00:53:52.760 Data Area 4 for Telemetry Log: Not Supported
00:53:52.760 Error Log Page Entries Supported: 128
00:53:52.760 Keep Alive: Supported
00:53:52.760 Keep Alive Granularity: 10000 ms
00:53:52.760
00:53:52.760 NVM Command Set Attributes
00:53:52.760 ==========================
00:53:52.760 Submission Queue Entry Size
00:53:52.760 Max: 64
00:53:52.760 Min: 64
00:53:52.760 Completion Queue Entry Size
00:53:52.760 Max: 16
00:53:52.760 Min: 16
00:53:52.760 Number of Namespaces: 32
00:53:52.760 Compare Command: Supported
00:53:52.760 Write Uncorrectable Command: Not Supported
00:53:52.760 Dataset Management Command: Supported
00:53:52.760 Write Zeroes Command: Supported
00:53:52.760 Set Features Save Field: Not Supported
00:53:52.760 Reservations: Supported
00:53:52.760 Timestamp: Not Supported
00:53:52.760 Copy: Supported
00:53:52.760 Volatile Write Cache: Present
00:53:52.760 Atomic Write Unit (Normal): 1
00:53:52.760 Atomic Write Unit (PFail): 1
00:53:52.760 Atomic Compare & Write Unit: 1
00:53:52.760 Fused Compare & Write: Supported
00:53:52.760 Scatter-Gather List
00:53:52.760 SGL Command Set: Supported
00:53:52.760 SGL Keyed: Supported
00:53:52.760 SGL Bit Bucket Descriptor: Not Supported
00:53:52.760 SGL Metadata Pointer: Not Supported
00:53:52.760 Oversized SGL: Not Supported
00:53:52.760 SGL Metadata Address: Not Supported
00:53:52.760 SGL Offset: Supported
00:53:52.760 Transport SGL Data Block: Not Supported
00:53:52.760 Replay Protected Memory Block: Not Supported
00:53:52.760
00:53:52.760 Firmware Slot Information
00:53:52.760 =========================
00:53:52.760 Active slot: 1
00:53:52.760 Slot 1 Firmware Revision: 25.01
00:53:52.760
00:53:52.760
00:53:52.760 Commands Supported and Effects
00:53:52.760 ==============================
00:53:52.760 Admin Commands
00:53:52.760 --------------
00:53:52.760 Get Log Page (02h): Supported
00:53:52.760 Identify (06h): Supported
00:53:52.761 Abort (08h): Supported
00:53:52.761 Set Features (09h): Supported
00:53:52.761 Get Features (0Ah): Supported
00:53:52.761 Asynchronous Event Request (0Ch): Supported
00:53:52.761 Keep Alive (18h): Supported
00:53:52.761 I/O Commands
00:53:52.761 ------------
00:53:52.761 Flush (00h): Supported LBA-Change
00:53:52.761 Write (01h): Supported LBA-Change
00:53:52.761 Read (02h): Supported
00:53:52.761 Compare (05h): Supported
00:53:52.761 Write Zeroes (08h): Supported LBA-Change
00:53:52.761 Dataset Management (09h): Supported LBA-Change
00:53:52.761 Copy (19h): Supported LBA-Change
00:53:52.761
00:53:52.761 Error Log
00:53:52.761 =========
00:53:52.761
00:53:52.761 Arbitration
00:53:52.761 ===========
00:53:52.761 Arbitration Burst: 1
00:53:52.761
00:53:52.761 Power Management
00:53:52.761 ================
00:53:52.761 Number of Power States: 1
00:53:52.761 Current Power State: Power State #0
00:53:52.761 Power State #0:
00:53:52.761 Max Power: 0.00 W
00:53:52.761 Non-Operational State: Operational
00:53:52.761 Entry Latency: Not Reported
00:53:52.761 Exit Latency: Not Reported
00:53:52.761 Relative Read Throughput: 0
00:53:52.761 Relative Read Latency: 0
00:53:52.761 Relative Write Throughput: 0
00:53:52.761 Relative Write Latency: 0
00:53:52.761 Idle Power: Not Reported
00:53:52.761 Active Power: Not Reported
00:53:52.761 Non-Operational Permissive Mode: Not Supported
00:53:52.761
00:53:52.761 Health Information
00:53:52.761 ==================
00:53:52.761 Critical Warnings:
00:53:52.761 Available Spare Space: OK
00:53:52.761 Temperature: OK
00:53:52.761 Device Reliability: OK
00:53:52.761 Read Only: No
00:53:52.761 Volatile Memory Backup: OK
00:53:52.761 Current Temperature: 0 Kelvin (-273 Celsius)
00:53:52.761 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:53:52.761 Available Spare: 0%
00:53:52.761 Available Spare Threshold: 0%
00:53:52.761 Life Percentage Used:[2024-12-09 10:04:59.628749] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:53:52.761 [2024-12-09 10:04:59.628756] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xee6d90)
00:53:52.761 [2024-12-09 10:04:59.628767] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:53:52.761 [2024-12-09 10:04:59.628806] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf28080, cid 7, qid 0
00:53:52.761 [2024-12-09 10:04:59.628864] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:53:52.761 [2024-12-09 10:04:59.628871] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:53:52.761 [2024-12-09 10:04:59.628876] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:53:52.761 [2024-12-09 10:04:59.628880] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf28080) on tqpair=0xee6d90
00:53:52.761 [2024-12-09 10:04:59.628934] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD
00:53:52.761 [2024-12-09 10:04:59.628946] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf27600) on tqpair=0xee6d90
00:53:52.761 [2024-12-09 10:04:59.628954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:53:52.761 [2024-12-09 10:04:59.628960] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf27780) on tqpair=0xee6d90
00:53:52.761 [2024-12-09 10:04:59.628965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:53:52.761 [2024-12-09 10:04:59.628971] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf27900) on tqpair=0xee6d90
00:53:52.761 [2024-12-09 10:04:59.628976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:53:52.761 [2024-12-09 10:04:59.628981] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf27a80) on tqpair=0xee6d90
00:53:52.761 [2024-12-09 10:04:59.628987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:53:52.761 [2024-12-09 10:04:59.628999] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:53:52.761 [2024-12-09 10:04:59.629003] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:53:52.761 [2024-12-09 10:04:59.629007] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee6d90)
00:53:52.761 [2024-12-09 10:04:59.629015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:53:52.761 [2024-12-09 10:04:59.629041] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf27a80, cid 3, qid 0
00:53:52.761 [2024-12-09
10:04:59.629094] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.761 [2024-12-09 10:04:59.629101] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.761 [2024-12-09 10:04:59.629105] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.761 [2024-12-09 10:04:59.629109] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf27a80) on tqpair=0xee6d90 00:53:52.761 [2024-12-09 10:04:59.629117] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.761 [2024-12-09 10:04:59.629122] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.761 [2024-12-09 10:04:59.629126] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee6d90) 00:53:52.761 [2024-12-09 10:04:59.629133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.761 [2024-12-09 10:04:59.629157] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf27a80, cid 3, qid 0 00:53:52.761 [2024-12-09 10:04:59.629226] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.761 [2024-12-09 10:04:59.629233] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.761 [2024-12-09 10:04:59.629237] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.761 [2024-12-09 10:04:59.629241] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf27a80) on tqpair=0xee6d90 00:53:52.761 [2024-12-09 10:04:59.629247] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:53:52.761 [2024-12-09 10:04:59.629253] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:53:52.761 [2024-12-09 10:04:59.629262] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.761 [2024-12-09 10:04:59.629266] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.761 [2024-12-09 10:04:59.629270] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee6d90) 00:53:52.761 [2024-12-09 10:04:59.629278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.761 [2024-12-09 10:04:59.629297] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf27a80, cid 3, qid 0 00:53:52.761 [2024-12-09 10:04:59.629340] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:53:52.761 [2024-12-09 10:04:59.629347] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:53:52.761 [2024-12-09 10:04:59.629351] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:53:52.761 [2024-12-09 10:04:59.629355] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf27a80) on tqpair=0xee6d90 00:53:52.761 [2024-12-09 10:04:59.629365] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:53:52.761 [2024-12-09 10:04:59.629369] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:53:52.761 [2024-12-09 10:04:59.629374] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee6d90) 00:53:52.761 [2024-12-09 10:04:59.629381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:52.761 [2024-12-09 10:04:59.629400] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf27a80, cid 3, qid 0
00:53:52.761 [2024-12-09 10:04:59.629455] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:53:52.761 [2024-12-09 10:04:59.629462] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:53:52.761 [2024-12-09 10:04:59.629466] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:53:52.761 [2024-12-09 10:04:59.629470] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf27a80) on tqpair=0xee6d90
00:53:52.761 [2024-12-09 10:04:59.629479] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:53:52.761 [2024-12-09 10:04:59.629484] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:53:52.761 [2024-12-09 10:04:59.629503] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee6d90)
00:53:52.761 [2024-12-09 10:04:59.629511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:53:52.761 [2024-12-09 10:04:59.629532] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf27a80, cid 3, qid 0
[... the cycle above (pdu type = 5 receive, complete tcp_req(0xf27a80), capsule_cmd cid=3 resend, FABRIC PROPERTY GET notice) repeats verbatim, timestamps aside, for roughly two dozen further property-get capsules between 10:04:59.629583 and 10:04:59.636618 ...]
00:53:52.764 [2024-12-09 10:04:59.636674] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:53:52.764 [2024-12-09 10:04:59.636681] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:53:52.764 [2024-12-09 10:04:59.636685] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:53:52.764 [2024-12-09 10:04:59.636689] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf27a80) on tqpair=0xee6d90
00:53:52.764 [2024-12-09 10:04:59.636697] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds
00:53:52.764 0%
00:53:52.764 Data Units Read: 0
00:53:52.764 Data Units Written: 0
00:53:52.764 Host Read Commands: 0
00:53:52.764 Host Write Commands: 0
00:53:52.764 Controller Busy Time: 0 minutes
00:53:52.764 Power Cycles: 0
00:53:52.764 Power On Hours: 0 hours
00:53:52.764 Unsafe Shutdowns: 0
00:53:52.764 Unrecoverable Media Errors: 0
00:53:52.764 Lifetime Error Log Entries: 0
00:53:52.764 Warning Temperature Time: 0 minutes
00:53:52.764 Critical Temperature Time: 0 minutes
00:53:52.764
00:53:52.764 Number of Queues
00:53:52.764 ================
00:53:52.764 Number of I/O Submission Queues: 127
00:53:52.764 Number of I/O Completion Queues: 127
00:53:52.764
00:53:52.764 Active Namespaces
00:53:52.764 =================
00:53:52.764 Namespace ID:1
00:53:52.764 Error Recovery Timeout: Unlimited
00:53:52.764 Command Set Identifier: NVM (00h)
00:53:52.764 Deallocate: Supported
00:53:52.764 Deallocated/Unwritten Error: Not Supported
00:53:52.764 Deallocated Read Value: Unknown
00:53:52.764 Deallocate in Write Zeroes: Not Supported
00:53:52.764 Deallocated Guard Field: 0xFFFF
00:53:52.764 Flush: Supported
00:53:52.764 Reservation: Supported
00:53:52.764 Namespace Sharing Capabilities: Multiple Controllers
00:53:52.764 Size (in LBAs): 131072 (0GiB)
00:53:52.764 Capacity (in LBAs): 131072 (0GiB)
00:53:52.764 Utilization (in LBAs): 131072 (0GiB)
00:53:52.764 NGUID: ABCDEF0123456789ABCDEF0123456789
00:53:52.764 EUI64: ABCDEF0123456789
00:53:52.764 UUID: 04035ba5-8e14-43fc-9f08-b0ea25d7bd23
00:53:52.764 Thin Provisioning: Not Supported
00:53:52.764 Per-NS Atomic Units: Yes
00:53:52.764 Atomic Boundary Size (Normal): 0
00:53:52.764 Atomic Boundary Size (PFail): 0
00:53:52.764 Atomic Boundary Offset: 0
00:53:52.764 Maximum Single Source Range Length: 65535
00:53:52.764 Maximum Copy Length: 65535
00:53:52.764 Maximum Source Range Count: 1
00:53:52.764 NGUID/EUI64 Never Reused: No
00:53:52.764 Namespace Write Protected: No
00:53:52.764 Number of LBA Formats: 1
00:53:52.764 Current LBA Format: LBA Format #00
00:53:52.764 LBA Format #00: Data Size: 512 Metadata Size: 0
00:53:52.764
00:53:52.764 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:53:52.764 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:53:52.764 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:53:52.764 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:53:52.764 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:53:52.764 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:53:52.764 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:53:52.764 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup
00:53:52.764 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync
00:53:52.764 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:53:52.764 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e
00:53:52.764 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20}
00:53:52.764 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:53:52.764 rmmod nvme_tcp
00:53:52.764 rmmod nvme_fabrics
00:53:52.764 rmmod nvme_keyring
00:53:53.023 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:53:53.023 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e
00:53:53.023 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0
00:53:53.023 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 87907 ']'
00:53:53.023 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 87907
00:53:53.023 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 87907 ']'
00:53:53.023 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 87907
00:53:53.023 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname
00:53:53.023 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:53:53.023 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87907
00:53:53.023 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0
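
The teardown traced above reduces to one management RPC plus kernel-module cleanup on the initiator side. A minimal sketch of the same sequence, assuming SPDK's scripts/rpc.py is on PATH and the target still answers on the default /var/tmp/spdk.sock socket:

    # Remove the subsystem the identify test created; the target drops
    # its 10.0.0.3:4420 listener for that subsystem as a side effect.
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # Unload the kernel initiator stack; -r also removes now-unused
    # dependents (nvme_tcp, then nvme_fabrics/nvme_keyring, as the
    # rmmod lines above show).
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
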
00:53:53.023 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:53:53.023 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87907'
00:53:53.023 killing process with pid 87907
00:53:53.023 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 87907
00:53:53.023 10:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 87907
00:53:53.282 10:05:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:53:53.282 10:05:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:53:53.282 10:05:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:53:53.282 10:05:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr
00:53:53.282 10:05:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save
00:53:53.282 10:05:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore
00:53:53.282 10:05:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:53:53.282 10:05:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:53:53.282 10:05:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:53:53.282 10:05:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:53:53.282 10:05:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:53:53.282 10:05:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:53:53.282 10:05:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:53:53.282 10:05:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:53:53.282 10:05:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:53:53.282 10:05:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:53:53.282 10:05:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:53:53.282 10:05:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:53:53.541 10:05:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:53:53.541 10:05:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:53:53.541 10:05:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:53:53.542 10:05:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:53:53.542 10:05:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns
00:53:53.542 10:05:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:53:53.542 10:05:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:53:53.542 10:05:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:53:53.542 10:05:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0
00:53:53.542
00:53:53.542 real 0m3.282s
00:53:53.542 user 0m7.930s
00:53:53.542 sys 0m0.988s
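
The iptr helper traced above deserves a note: every firewall rule the suite installs is tagged with an SPDK_NVMF comment, so teardown can strip exactly those rules without disturbing the rest of the host's ruleset. The idea, condensed into a sketch (the real helper lives in test/nvmf/common.sh):

    # Dump the live ruleset, drop every rule carrying the SPDK_NVMF
    # comment tag, and load the filtered ruleset back in one pass.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
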
00:53:53.542 10:05:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:53:53.542 ************************************
00:53:53.542 END TEST nvmf_identify
00:53:53.542 ************************************
00:53:53.542 10:05:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:53:53.542 10:05:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp
00:53:53.542 10:05:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:53:53.542 10:05:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:53:53.542 10:05:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:53:53.542 ************************************
00:53:53.542 START TEST nvmf_perf
00:53:53.542 ************************************
00:53:53.542 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp
00:53:53.801 * Looking for test storage...
00:53:53.801 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:53:53.801 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:53:53.801 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version
00:53:53.801 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:53:53.801 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:53:53.801 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:53:53.801 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:53:53.801 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:53:53.801 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-:
00:53:53.801 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1
00:53:53.801 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-:
00:53:53.801 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2
00:53:53.801 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<'
00:53:53.801 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2
00:53:53.801 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1
00:53:53.801 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:53:53.801 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in
00:53:53.801 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1
00:53:53.801 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 ))
00:53:53.801 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:53:53.801 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:53:53.801 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:53:53.801 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:53:53.801 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:53:53.801 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:53:53.801 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:53:53.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:53.802 --rc genhtml_branch_coverage=1 00:53:53.802 --rc genhtml_function_coverage=1 00:53:53.802 --rc genhtml_legend=1 00:53:53.802 --rc geninfo_all_blocks=1 00:53:53.802 --rc geninfo_unexecuted_blocks=1 00:53:53.802 00:53:53.802 ' 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:53:53.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:53.802 --rc genhtml_branch_coverage=1 00:53:53.802 --rc genhtml_function_coverage=1 00:53:53.802 --rc genhtml_legend=1 00:53:53.802 --rc geninfo_all_blocks=1 00:53:53.802 --rc geninfo_unexecuted_blocks=1 00:53:53.802 00:53:53.802 ' 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:53:53.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:53.802 --rc genhtml_branch_coverage=1 00:53:53.802 --rc genhtml_function_coverage=1 00:53:53.802 --rc genhtml_legend=1 00:53:53.802 --rc geninfo_all_blocks=1 00:53:53.802 --rc geninfo_unexecuted_blocks=1 00:53:53.802 00:53:53.802 ' 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:53:53.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:53.802 --rc genhtml_branch_coverage=1 00:53:53.802 --rc genhtml_function_coverage=1 00:53:53.802 --rc genhtml_legend=1 00:53:53.802 --rc geninfo_all_blocks=1 00:53:53.802 --rc geninfo_unexecuted_blocks=1 00:53:53.802 00:53:53.802 ' 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:53:53.802 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null'
00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init
00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:53:53.802 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:53:53.802 Cannot find device "nvmf_init_br"
00:53:54.062 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true
00:53:54.062 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:53:54.062 Cannot find device "nvmf_init_br2"
00:53:54.062 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true
00:53:54.062 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:53:54.062 Cannot find device "nvmf_tgt_br"
00:53:54.062 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true
00:53:54.062 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:53:54.062 Cannot find device "nvmf_tgt_br2"
00:53:54.062 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true
00:53:54.062 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:53:54.062 Cannot find device "nvmf_init_br"
00:53:54.062 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true
00:53:54.062 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:53:54.062 Cannot find device "nvmf_init_br2"
00:53:54.062 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true
00:53:54.062 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:53:54.062 Cannot find device "nvmf_tgt_br"
00:53:54.062 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true
00:53:54.062 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:53:54.062 Cannot find device "nvmf_tgt_br2"
00:53:54.062 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true
00:53:54.062 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:53:54.062 Cannot find device "nvmf_br"
00:53:54.062 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true
00:53:54.062 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:53:54.062 Cannot find device "nvmf_init_if"
00:53:54.062 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true
00:53:54.062 10:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:53:54.062 Cannot find device "nvmf_init_if2"
00:53:54.062 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true
00:53:54.062 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:53:54.062 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:53:54.063 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true
00:53:54.063 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:53:54.063 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:53:54.063 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true
00:53:54.063 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:53:54.063 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:53:54.063 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:53:54.063 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:53:54.063 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:53:54.063 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:53:54.322 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:53:54.322 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:53:54.322 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:53:54.322 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:53:54.322 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:53:54.322 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:53:54.322 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:53:54.322 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:53:54.322 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:53:54.322 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:53:54.322 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:53:54.322 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:53:54.322 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:53:54.322 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:53:54.322 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:53:54.322 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:53:54.322 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:53:54.322 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:53:54.322 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:53:54.322 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:53:54.322 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:53:54.322 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:53:54.322 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:53:54.323 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:53:54.323 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:53:54.323 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:53:54.323 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:53:54.323 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:53:54.323 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.133 ms
00:53:54.323
00:53:54.323 --- 10.0.0.3 ping statistics ---
00:53:54.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:53:54.323 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms
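
Reassembled from the trace, the topology nvmf_veth_init has just built is: two initiator veth pairs left in the root namespace, two target veth pairs whose inner ends are moved into nvmf_tgt_ns_spdk, and a bridge joining the four outer ends; the ping above is the first reachability check. A condensed sketch of that plumbing with the names and addresses the log uses (only the first initiator/target pair shown, and the iptables comment tag simplified relative to the full rule text in the trace):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # inner end into the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # bridge the outer ends
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF                          # tagged for iptr cleanup
    ping -c 1 10.0.0.3                                          # initiator-to-target check

(Bringing each interface up with ip link set ... up is elided here; the trace does it for all eight links plus lo inside the namespace.)
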
00:53:54.323 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:53:54.323 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:53:54.323 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.148 ms
00:53:54.323
00:53:54.323 --- 10.0.0.4 ping statistics ---
00:53:54.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:53:54.323 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms
00:53:54.323 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:53:54.323 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:53:54.323 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms
00:53:54.323
00:53:54.323 --- 10.0.0.1 ping statistics ---
00:53:54.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:53:54.323 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms
00:53:54.323 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:53:54.323 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:53:54.323 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms
00:53:54.323
00:53:54.323 --- 10.0.0.2 ping statistics ---
00:53:54.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:53:54.323 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms
00:53:54.323 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:53:54.323 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0
00:53:54.323 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:53:54.323 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:53:54.323 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:53:54.323 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:53:54.323 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:53:54.323 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:53:54.323 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:53:54.582 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:53:54.582 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:53:54.582 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable
00:53:54.582 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:53:54.582 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=88188
00:53:54.582 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 88188
00:53:54.582 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:53:54.582 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 88188 ']'
00:53:54.582 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:53:54.582 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100
00:53:54.582 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:53:54.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
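
waitforlisten, traced above, is what keeps the script from racing the target: nvmf_tgt is launched in the background inside the namespace, and the helper polls the RPC socket until the application answers. A sketch of that pattern, assuming polling via rpc.py rpc_get_methods (the real helper in autotest_common.sh is more careful, e.g. it also verifies that the pid is still alive on each retry):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do        # max_retries=100, as in the trace
        # Succeeds only once the target is serving RPCs on the socket.
        rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done
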
00:53:54.582 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:54.582 10:05:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:53:54.582 [2024-12-09 10:05:01.443787] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:53:54.582 [2024-12-09 10:05:01.443970] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:53:54.582 [2024-12-09 10:05:01.600678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:53:54.841 [2024-12-09 10:05:01.683834] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:53:54.841 [2024-12-09 10:05:01.684015] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:53:54.841 [2024-12-09 10:05:01.684060] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:53:54.841 [2024-12-09 10:05:01.684093] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:53:54.841 [2024-12-09 10:05:01.684113] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:53:54.841 [2024-12-09 10:05:01.685568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:54.841 [2024-12-09 10:05:01.685781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:53:54.841 [2024-12-09 10:05:01.686003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:53:54.841 [2024-12-09 10:05:01.686005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:53:55.408 10:05:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:55.408 10:05:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:53:55.408 10:05:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:53:55.408 10:05:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:53:55.408 10:05:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:53:55.408 10:05:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:53:55.408 10:05:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:53:55.408 10:05:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:53:55.976 10:05:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:53:55.976 10:05:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:53:56.235 10:05:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:53:56.235 10:05:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:53:56.494 10:05:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:53:56.494 10:05:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:53:56.494 10:05:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 
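With the wiring verified, nvmfappstart launches the target inside the namespace (NVMF_APP is prefixed with the netns command) and waitforlisten blocks until the RPC socket answers; here the target came up as pid 88188 with -m 0xF, which is why four "Reactor started on core" notices appear. A minimal sketch of that start-and-wait pattern (the polling loop is illustrative, not the harness's exact waitforlisten implementation):

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket until the app is ready to serve requests
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # give up if the target died during startup
        sleep 0.2
    done

The two bdevs created right after ($bdevs = ' Malloc0 Nvme0n1') -- a 64 MiB, 512-byte-block RAM disk from bdev_malloc_create plus the 0000:00:10.0 NVMe drive attached via gen_nvme.sh -- become the subsystem's namespaces below.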
00:53:56.494 10:05:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:53:56.494 10:05:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:53:56.753 [2024-12-09 10:05:03.550972] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:53:56.753 10:05:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:53:57.012 10:05:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:53:57.012 10:05:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:53:57.012 10:05:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:53:57.012 10:05:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:53:57.272 10:05:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:53:57.531 [2024-12-09 10:05:04.470418] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:53:57.531 10:05:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:53:57.793 10:05:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:53:57.793 10:05:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:53:57.793 10:05:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:53:57.793 10:05:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:53:59.172 Initializing NVMe Controllers 00:53:59.172 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:53:59.172 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:53:59.172 Initialization complete. Launching workers. 00:53:59.172 ======================================================== 00:53:59.172 Latency(us) 00:53:59.172 Device Information : IOPS MiB/s Average min max 00:53:59.172 PCIE (0000:00:10.0) NSID 1 from core 0: 21472.00 83.88 1490.89 281.06 7564.57 00:53:59.172 ======================================================== 00:53:59.172 Total : 21472.00 83.88 1490.89 281.06 7564.57 00:53:59.172 00:53:59.172 10:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:54:00.108 Initializing NVMe Controllers 00:54:00.108 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:54:00.108 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:54:00.108 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:54:00.108 Initialization complete. Launching workers. 
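Provisioning then happens entirely over RPC: one TCP transport, one subsystem, a namespace per bdev, and data plus discovery listeners on 10.0.0.3:4420. Collapsed from the trace above (every command is verbatim from the log; only the rpc.py path is shortened):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host NQN
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # becomes NSID 1 (512 B sectors)
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # becomes NSID 2 (4096 B sectors)
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

The perf numbers that follow are a local baseline: spdk_nvme_perf is first pointed at the PCIe controller (trtype:PCIe traddr:0000:00:10.0), so the later NVMe/TCP results can be read against the raw device.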
00:54:00.108 ======================================================== 00:54:00.108 Latency(us) 00:54:00.108 Device Information : IOPS MiB/s Average min max 00:54:00.108 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5148.15 20.11 194.01 73.13 4195.10 00:54:00.108 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 122.77 0.48 8145.50 7967.10 12037.66 00:54:00.108 ======================================================== 00:54:00.108 Total : 5270.91 20.59 379.21 73.13 12037.66 00:54:00.108 00:54:00.367 10:05:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:54:01.777 Initializing NVMe Controllers 00:54:01.777 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:54:01.777 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:54:01.777 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:54:01.777 Initialization complete. Launching workers. 00:54:01.777 ======================================================== 00:54:01.777 Latency(us) 00:54:01.777 Device Information : IOPS MiB/s Average min max 00:54:01.777 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10673.95 41.70 2998.00 477.45 10124.22 00:54:01.777 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2646.99 10.34 12167.35 4774.45 24178.27 00:54:01.777 ======================================================== 00:54:01.777 Total : 13320.93 52.03 4820.03 477.45 24178.27 00:54:01.777 00:54:01.777 10:05:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:54:01.777 10:05:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:54:04.316 Initializing NVMe Controllers 00:54:04.316 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:54:04.316 Controller IO queue size 128, less than required. 00:54:04.316 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:54:04.316 Controller IO queue size 128, less than required. 00:54:04.316 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:54:04.316 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:54:04.316 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:54:04.316 Initialization complete. Launching workers. 
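The fabric runs sweep spdk_nvme_perf's knobs one at a time: -q sets queue depth (1 isolates per-command latency, ~194 us average on NSID 1 above; 32 and 128 trade latency for throughput), -o sets I/O size in bytes, -w randrw with -M 50 asks for a 50/50 random read/write mix, -t is the run time in seconds, and -r is the transport ID string that selects the NVMe/TCP target. An annotated version of the large-block invocation whose results follow (the flag glosses are mine; -O and -HI are further perf options from these runs, left uninterpreted here):

    ./build/bin/spdk_nvme_perf \
        -q 128 -o 262144 \   # queue depth 128, 256 KiB I/Os
        -w randrw -M 50 \    # random workload, 50% reads / 50% writes
        -t 2 \               # run for 2 seconds
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'

The repeated "Controller IO queue size 128, less than required" notice is expected at this depth: as the message itself says, submissions beyond the controller's queue size simply wait in the NVMe driver.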
00:54:04.316 ======================================================== 00:54:04.316 Latency(us) 00:54:04.316 Device Information : IOPS MiB/s Average min max 00:54:04.316 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2066.41 516.60 62856.73 38975.54 115240.14 00:54:04.316 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 614.47 153.62 226121.59 94699.64 382983.27 00:54:04.316 ======================================================== 00:54:04.316 Total : 2680.88 670.22 100277.94 38975.54 382983.27 00:54:04.316 00:54:04.316 10:05:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:54:04.575 Initializing NVMe Controllers 00:54:04.575 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:54:04.575 Controller IO queue size 128, less than required. 00:54:04.575 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:54:04.575 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:54:04.575 Controller IO queue size 128, less than required. 00:54:04.575 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:54:04.575 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:54:04.575 WARNING: Some requested NVMe devices were skipped 00:54:04.575 No valid NVMe controllers or AIO or URING devices found 00:54:04.575 10:05:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:54:07.111 Initializing NVMe Controllers 00:54:07.111 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:54:07.111 Controller IO queue size 128, less than required. 00:54:07.111 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:54:07.111 Controller IO queue size 128, less than required. 00:54:07.111 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:54:07.111 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:54:07.111 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:54:07.111 Initialization complete. Launching workers. 
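The -o 36964 run above is a deliberate negative test: the I/O size is not a whole number of sectors on either namespace, so perf drops both and exits with nothing to test. The arithmetic behind the two warnings:

    $ echo $(( 36964 % 512 )) $(( 36964 % 4096 ))
    100 100

36964 = 72 * 512 + 100 = 9 * 4096 + 100, i.e. 100 bytes past a sector boundary for both the 512-byte NSID 1 and the 4096-byte NSID 2. The --transport-stat run that follows repeats the 256 KiB mix and prepends per-poll-group TCP counters (polls vs. idle_polls, socket and NVMe completions, queued requests) to the final latency table.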
00:54:07.111 00:54:07.111 ==================== 00:54:07.111 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:54:07.111 TCP transport: 00:54:07.111 polls: 13520 00:54:07.111 idle_polls: 7407 00:54:07.111 sock_completions: 6113 00:54:07.111 nvme_completions: 3905 00:54:07.111 submitted_requests: 5908 00:54:07.111 queued_requests: 1 00:54:07.111 00:54:07.111 ==================== 00:54:07.111 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:54:07.111 TCP transport: 00:54:07.111 polls: 13535 00:54:07.111 idle_polls: 9384 00:54:07.111 sock_completions: 4151 00:54:07.111 nvme_completions: 7493 00:54:07.111 submitted_requests: 11274 00:54:07.111 queued_requests: 1 00:54:07.111 ======================================================== 00:54:07.111 Latency(us) 00:54:07.111 Device Information : IOPS MiB/s Average min max 00:54:07.111 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 974.92 243.73 134280.77 73172.94 191273.95 00:54:07.111 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1870.92 467.73 68629.66 34897.25 117547.44 00:54:07.111 ======================================================== 00:54:07.111 Total : 2845.84 711.46 91120.18 34897.25 191273.95 00:54:07.111 00:54:07.111 10:05:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:54:07.111 10:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:54:07.370 10:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:54:07.370 10:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:54:07.370 10:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:54:07.370 10:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:54:07.370 10:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:54:07.370 10:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:54:07.370 10:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:54:07.370 10:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:54:07.370 10:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:54:07.370 rmmod nvme_tcp 00:54:07.370 rmmod nvme_fabrics 00:54:07.370 rmmod nvme_keyring 00:54:07.370 10:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:54:07.370 10:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:54:07.370 10:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:54:07.370 10:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 88188 ']' 00:54:07.370 10:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 88188 00:54:07.370 10:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 88188 ']' 00:54:07.370 10:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 88188 00:54:07.370 10:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:54:07.370 10:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:07.370 10:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88188 00:54:07.370 killing process with pid 88188 00:54:07.370 10:05:14 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:54:07.370 10:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:54:07.370 10:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88188' 00:54:07.370 10:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 88188 00:54:07.370 10:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 88188 00:54:09.297 10:05:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:54:09.297 10:05:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:54:09.297 10:05:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:54:09.297 10:05:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:54:09.297 10:05:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:54:09.297 10:05:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:54:09.297 10:05:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:54:09.297 10:05:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:54:09.297 10:05:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:54:09.297 10:05:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:54:09.297 10:05:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:54:09.297 10:05:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:54:09.297 10:05:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:54:09.297 10:05:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:54:09.297 10:05:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:54:09.297 10:05:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:54:09.297 10:05:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:54:09.297 10:05:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:54:09.297 10:05:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:54:09.297 10:05:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:54:09.297 10:05:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:54:09.297 10:05:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:54:09.297 10:05:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:54:09.297 10:05:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:09.297 10:05:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:54:09.297 10:05:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:09.297 10:05:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:54:09.297 00:54:09.297 real 0m15.550s 00:54:09.297 user 0m55.656s 00:54:09.297 sys 0m3.775s 00:54:09.297 10:05:16 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:54:09.297 10:05:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:54:09.297 ************************************ 00:54:09.297 END TEST nvmf_perf 00:54:09.297 ************************************ 00:54:09.297 10:05:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:54:09.297 10:05:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:54:09.297 10:05:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:54:09.297 10:05:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:54:09.297 ************************************ 00:54:09.297 START TEST nvmf_fio_host 00:54:09.297 ************************************ 00:54:09.297 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:54:09.297 * Looking for test storage... 00:54:09.297 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:54:09.297 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:54:09.297 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:54:09.297 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:54:09.297 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:54:09.297 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:54:09.297 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:54:09.297 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:54:09.297 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:54:09.297 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:54:09.297 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:54:09.297 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:54:09.297 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:54:09.297 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:54:09.297 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:54:09.297 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:54:09.297 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:54:09.297 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:54:09.297 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:54:09.297 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:54:09.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:09.562 --rc genhtml_branch_coverage=1 00:54:09.562 --rc genhtml_function_coverage=1 00:54:09.562 --rc genhtml_legend=1 00:54:09.562 --rc geninfo_all_blocks=1 00:54:09.562 --rc geninfo_unexecuted_blocks=1 00:54:09.562 00:54:09.562 ' 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:54:09.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:09.562 --rc genhtml_branch_coverage=1 00:54:09.562 --rc genhtml_function_coverage=1 00:54:09.562 --rc genhtml_legend=1 00:54:09.562 --rc geninfo_all_blocks=1 00:54:09.562 --rc geninfo_unexecuted_blocks=1 00:54:09.562 00:54:09.562 ' 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:54:09.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:09.562 --rc genhtml_branch_coverage=1 00:54:09.562 --rc genhtml_function_coverage=1 00:54:09.562 --rc genhtml_legend=1 00:54:09.562 --rc geninfo_all_blocks=1 00:54:09.562 --rc geninfo_unexecuted_blocks=1 00:54:09.562 00:54:09.562 ' 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:54:09.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:09.562 --rc genhtml_branch_coverage=1 00:54:09.562 --rc genhtml_function_coverage=1 00:54:09.562 --rc genhtml_legend=1 00:54:09.562 --rc geninfo_all_blocks=1 00:54:09.562 --rc geninfo_unexecuted_blocks=1 00:54:09.562 00:54:09.562 ' 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:54:09.562 10:05:16 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:54:09.562 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:09.563 10:05:16 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:54:09.563 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
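Before nvmftestinit tears down and rebuilds the network, common.sh establishes the initiator's identity: nvme gen-hostnqn returns a fresh UUID-based NQN of the standard form nqn.2014-08.org.nvmexpress:uuid:<uuid>, and the harness reuses the bare UUID as the host ID, passing both to every nvme connect via the NVME_HOST array. A sketch of that derivation (the parameter expansion is one way to recover the UUID, not necessarily the script's exact code):

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}    # strip the prefix, keeping just the UUID
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")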
00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:54:09.563 Cannot find device "nvmf_init_br" 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:54:09.563 Cannot find device "nvmf_init_br2" 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:54:09.563 Cannot find device "nvmf_tgt_br" 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:54:09.563 Cannot find device "nvmf_tgt_br2" 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:54:09.563 Cannot find device "nvmf_init_br" 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:54:09.563 Cannot find device "nvmf_init_br2" 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:54:09.563 Cannot find device "nvmf_tgt_br" 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:54:09.563 Cannot find device "nvmf_tgt_br2" 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:54:09.563 Cannot find device "nvmf_br" 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:54:09.563 Cannot find device "nvmf_init_if" 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:54:09.563 Cannot find device "nvmf_init_if2" 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:54:09.563 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:54:09.563 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:54:09.823 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:54:09.823 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:54:09.823 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:54:09.823 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:54:09.823 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:54:09.823 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:54:09.823 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:54:09.823 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:54:09.824 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:54:09.824 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:54:09.824 00:54:09.824 --- 10.0.0.3 ping statistics --- 00:54:09.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:09.824 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:54:09.824 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:54:09.824 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.096 ms 00:54:09.824 00:54:09.824 --- 10.0.0.4 ping statistics --- 00:54:09.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:09.824 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:54:09.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:54:09.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:54:09.824 00:54:09.824 --- 10.0.0.1 ping statistics --- 00:54:09.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:09.824 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:54:09.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:54:09.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:54:09.824 00:54:09.824 --- 10.0.0.2 ping statistics --- 00:54:09.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:09.824 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=88732 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 88732 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 88732 ']' 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:09.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:09.824 10:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:54:10.084 [2024-12-09 10:05:16.883588] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:54:10.084 [2024-12-09 10:05:16.883666] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:54:10.084 [2024-12-09 10:05:17.036923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:54:10.084 [2024-12-09 10:05:17.084718] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:54:10.084 [2024-12-09 10:05:17.084781] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:54:10.084 [2024-12-09 10:05:17.084788] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:54:10.084 [2024-12-09 10:05:17.084792] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:54:10.084 [2024-12-09 10:05:17.084797] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
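As with the perf target, this nvmf_tgt starts with -e 0xFFFF, so every tracepoint group is armed and the notices above spell out both ways to harvest the data. A sketch of each (the spdk_trace invocation is the one the log itself suggests; the binary path and copy destination are assumptions):

    # live snapshot of events while nvmf_tgt (shm id 0) is still running
    ./build/bin/spdk_trace -s nvmf -i 0
    # or keep the raw shared-memory trace file for offline analysis after the run
    cp /dev/shm/nvmf_trace.0 /tmp/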
00:54:10.084 [2024-12-09 10:05:17.085879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:54:10.084 [2024-12-09 10:05:17.086085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:54:10.084 [2024-12-09 10:05:17.086186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:54:10.084 [2024-12-09 10:05:17.086190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:54:11.023 10:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:11.023 10:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:54:11.023 10:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:54:11.023 [2024-12-09 10:05:17.958763] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:54:11.023 10:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:54:11.023 10:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:54:11.023 10:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:54:11.023 10:05:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:54:11.283 Malloc1 00:54:11.283 10:05:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:54:11.543 10:05:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:54:11.802 10:05:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:54:12.062 [2024-12-09 10:05:18.933116] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:54:12.062 10:05:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:54:12.321 10:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:54:12.321 10:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:54:12.321 10:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:54:12.321 10:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:54:12.321 10:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:54:12.321 10:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:54:12.321 10:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:54:12.321 10:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # shift 00:54:12.321 10:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:54:12.321 10:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:54:12.321 10:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:54:12.321 10:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:54:12.321 10:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:54:12.321 10:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:54:12.321 10:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:54:12.321 10:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:54:12.321 10:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:54:12.321 10:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:54:12.321 10:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:54:12.321 10:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:54:12.322 10:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:54:12.322 10:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:54:12.322 10:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:54:12.581 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:54:12.581 fio-3.35 00:54:12.581 Starting 1 thread 00:54:15.192 00:54:15.192 test: (groupid=0, jobs=1): err= 0: pid=88858: Mon Dec 9 10:05:21 2024 00:54:15.192 read: IOPS=10.5k, BW=41.1MiB/s (43.1MB/s)(82.5MiB/2005msec) 00:54:15.192 slat (nsec): min=1523, max=502654, avg=1885.00, stdev=4533.32 00:54:15.192 clat (usec): min=4329, max=13554, avg=6394.58, stdev=661.80 00:54:15.192 lat (usec): min=4332, max=13567, avg=6396.46, stdev=662.15 00:54:15.192 clat percentiles (usec): 00:54:15.192 | 1.00th=[ 5211], 5.00th=[ 5538], 10.00th=[ 5735], 20.00th=[ 5932], 00:54:15.192 | 30.00th=[ 6063], 40.00th=[ 6194], 50.00th=[ 6325], 60.00th=[ 6456], 00:54:15.192 | 70.00th=[ 6587], 80.00th=[ 6783], 90.00th=[ 7111], 95.00th=[ 7373], 00:54:15.192 | 99.00th=[ 8291], 99.50th=[ 9110], 99.90th=[12911], 99.95th=[13435], 00:54:15.192 | 99.99th=[13566] 00:54:15.192 bw ( KiB/s): min=40968, max=42848, per=99.92%, avg=42082.00, stdev=794.34, samples=4 00:54:15.192 iops : min=10242, max=10712, avg=10520.50, stdev=198.58, samples=4 00:54:15.192 write: IOPS=10.5k, BW=41.1MiB/s (43.1MB/s)(82.4MiB/2005msec); 0 zone resets 00:54:15.192 slat (nsec): min=1559, max=353061, avg=1918.20, stdev=2658.89 00:54:15.192 clat (usec): min=3498, max=10688, avg=5722.16, stdev=541.89 00:54:15.192 lat (usec): min=3500, max=10690, avg=5724.08, stdev=542.10 00:54:15.192 clat percentiles (usec): 00:54:15.192 | 1.00th=[ 4621], 5.00th=[ 5014], 10.00th=[ 5145], 20.00th=[ 5342], 
00:54:15.192 | 30.00th=[ 5473], 40.00th=[ 5538], 50.00th=[ 5669], 60.00th=[ 5800], 00:54:15.192 | 70.00th=[ 5932], 80.00th=[ 6128], 90.00th=[ 6325], 95.00th=[ 6587], 00:54:15.192 | 99.00th=[ 7242], 99.50th=[ 7832], 99.90th=[ 9372], 99.95th=[ 9765], 00:54:15.192 | 99.99th=[10552] 00:54:15.192 bw ( KiB/s): min=41464, max=42592, per=99.98%, avg=42090.00, stdev=503.06, samples=4 00:54:15.192 iops : min=10366, max=10648, avg=10522.50, stdev=125.77, samples=4 00:54:15.192 lat (msec) : 4=0.06%, 10=99.79%, 20=0.15% 00:54:15.192 cpu : usr=69.61%, sys=23.70%, ctx=13, majf=0, minf=7 00:54:15.192 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:54:15.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:15.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:54:15.192 issued rwts: total=21111,21102,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:15.192 latency : target=0, window=0, percentile=100.00%, depth=128 00:54:15.192 00:54:15.192 Run status group 0 (all jobs): 00:54:15.192 READ: bw=41.1MiB/s (43.1MB/s), 41.1MiB/s-41.1MiB/s (43.1MB/s-43.1MB/s), io=82.5MiB (86.5MB), run=2005-2005msec 00:54:15.192 WRITE: bw=41.1MiB/s (43.1MB/s), 41.1MiB/s-41.1MiB/s (43.1MB/s-43.1MB/s), io=82.4MiB (86.4MB), run=2005-2005msec 00:54:15.192 10:05:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:54:15.192 10:05:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:54:15.192 10:05:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:54:15.192 10:05:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:54:15.192 10:05:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:54:15.192 10:05:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:54:15.192 10:05:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:54:15.192 10:05:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:54:15.192 10:05:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:54:15.192 10:05:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:54:15.192 10:05:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:54:15.192 10:05:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:54:15.192 10:05:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:54:15.192 10:05:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:54:15.192 10:05:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:54:15.192 10:05:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:54:15.192 10:05:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:54:15.192 10:05:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:54:15.192 10:05:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:54:15.192 10:05:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:54:15.192 10:05:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:54:15.192 10:05:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:54:15.192 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:54:15.192 fio-3.35 00:54:15.192 Starting 1 thread 00:54:17.729 00:54:17.729 test: (groupid=0, jobs=1): err= 0: pid=88905: Mon Dec 9 10:05:24 2024 00:54:17.729 read: IOPS=9499, BW=148MiB/s (156MB/s)(298MiB/2009msec) 00:54:17.729 slat (usec): min=2, max=100, avg= 3.21, stdev= 3.30 00:54:17.729 clat (usec): min=1793, max=34097, avg=7935.09, stdev=3050.42 00:54:17.729 lat (usec): min=1795, max=34115, avg=7938.30, stdev=3052.61 00:54:17.729 clat percentiles (usec): 00:54:17.729 | 1.00th=[ 3916], 5.00th=[ 4752], 10.00th=[ 5342], 20.00th=[ 5997], 00:54:17.729 | 30.00th=[ 6521], 40.00th=[ 7111], 50.00th=[ 7701], 60.00th=[ 8225], 00:54:17.729 | 70.00th=[ 8717], 80.00th=[ 9110], 90.00th=[10028], 95.00th=[11207], 00:54:17.729 | 99.00th=[23987], 99.50th=[31327], 99.90th=[32375], 99.95th=[33424], 00:54:17.729 | 99.99th=[33817] 00:54:17.729 bw ( KiB/s): min=73472, max=76192, per=49.11%, avg=74648.00, stdev=1133.29, samples=4 00:54:17.729 iops : min= 4592, max= 4762, avg=4665.50, stdev=70.83, samples=4 00:54:17.729 write: IOPS=5509, BW=86.1MiB/s (90.3MB/s)(153MiB/1774msec); 0 zone resets 00:54:17.729 slat (usec): min=27, max=831, avg=32.99, stdev=11.80 00:54:17.729 clat (usec): min=2516, max=44840, avg=9996.31, stdev=3425.30 00:54:17.729 lat (usec): min=2550, max=44897, avg=10029.30, stdev=3430.11 00:54:17.729 clat percentiles (usec): 00:54:17.729 | 1.00th=[ 6390], 5.00th=[ 7242], 10.00th=[ 7570], 20.00th=[ 8094], 00:54:17.729 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[10028], 00:54:17.729 | 70.00th=[10421], 80.00th=[10945], 90.00th=[11994], 95.00th=[13173], 00:54:17.729 | 99.00th=[31327], 99.50th=[34866], 99.90th=[35914], 99.95th=[35914], 00:54:17.729 | 99.99th=[44827] 00:54:17.729 bw ( KiB/s): min=76160, max=80192, per=88.36%, avg=77888.00, stdev=1713.32, samples=4 00:54:17.729 iops : min= 4760, max= 5012, avg=4868.00, stdev=107.08, samples=4 00:54:17.729 lat (msec) : 2=0.02%, 4=0.87%, 10=79.18%, 20=18.55%, 50=1.38% 00:54:17.729 cpu : usr=75.50%, sys=15.74%, ctx=34, majf=0, minf=18 00:54:17.729 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:54:17.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:17.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:54:17.729 issued rwts: total=19085,9773,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:17.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:54:17.729 00:54:17.729 Run status group 0 (all jobs): 00:54:17.729 READ: bw=148MiB/s (156MB/s), 148MiB/s-148MiB/s (156MB/s-156MB/s), io=298MiB (313MB), run=2009-2009msec 
00:54:17.729 WRITE: bw=86.1MiB/s (90.3MB/s), 86.1MiB/s-86.1MiB/s (90.3MB/s-90.3MB/s), io=153MiB (160MB), run=1774-1774msec 00:54:17.729 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:54:17.729 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:54:17.729 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:54:17.729 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:54:17.729 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:54:17.729 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:54:17.729 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:54:17.729 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:54:17.729 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:54:17.729 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:54:17.729 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:54:17.729 rmmod nvme_tcp 00:54:17.729 rmmod nvme_fabrics 00:54:17.729 rmmod nvme_keyring 00:54:17.729 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:54:17.729 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:54:17.729 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:54:17.729 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 88732 ']' 00:54:17.729 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 88732 00:54:17.729 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 88732 ']' 00:54:17.729 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 88732 00:54:17.729 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:54:17.729 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:17.729 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88732 00:54:17.729 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:54:17.729 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:54:17.729 killing process with pid 88732 00:54:17.729 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88732' 00:54:17.729 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 88732 00:54:17.729 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 88732 00:54:17.988 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:54:17.988 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:54:17.988 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:54:17.988 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:54:17.988 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:54:17.988 10:05:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:54:17.988 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:54:17.988 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:54:17.988 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:54:17.988 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:54:17.988 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:54:17.988 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:54:17.988 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:54:17.988 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:54:17.988 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:54:17.988 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:54:17.988 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:54:17.988 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:54:17.988 10:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:54:17.988 10:05:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:54:17.988 10:05:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:54:18.246 10:05:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:54:18.246 10:05:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:54:18.246 10:05:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:18.246 10:05:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:54:18.246 10:05:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:18.246 10:05:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:54:18.246 00:54:18.246 real 0m8.991s 00:54:18.246 user 0m35.133s 00:54:18.246 sys 0m2.368s 00:54:18.246 10:05:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:54:18.246 10:05:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:54:18.246 ************************************ 00:54:18.246 END TEST nvmf_fio_host 00:54:18.246 ************************************ 00:54:18.246 10:05:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:54:18.246 10:05:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:54:18.246 10:05:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:54:18.246 10:05:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:54:18.246 ************************************ 00:54:18.246 START TEST nvmf_failover 00:54:18.246 ************************************ 00:54:18.246 10:05:25 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:54:18.504 * Looking for test storage... 00:54:18.504 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:54:18.504 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:54:18.504 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:54:18.504 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:54:18.504 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:54:18.504 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:54:18.504 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:54:18.504 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:54:18.504 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:54:18.504 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:54:18.504 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:54:18.504 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:54:18.504 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:54:18.504 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:54:18.504 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:54:18.504 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:54:18.504 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:54:18.504 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:54:18.504 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:54:18.504 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:54:18.504 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:54:18.504 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:54:18.504 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:54:18.504 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:54:18.504 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:54:18.504 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:54:18.504 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:54:18.504 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:54:18.504 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:54:18.504 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:54:18.504 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:54:18.504 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:54:18.504 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:54:18.504 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:54:18.504 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:54:18.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:18.504 --rc genhtml_branch_coverage=1 00:54:18.504 --rc genhtml_function_coverage=1 00:54:18.505 --rc genhtml_legend=1 00:54:18.505 --rc geninfo_all_blocks=1 00:54:18.505 --rc geninfo_unexecuted_blocks=1 00:54:18.505 00:54:18.505 ' 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:54:18.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:18.505 --rc genhtml_branch_coverage=1 00:54:18.505 --rc genhtml_function_coverage=1 00:54:18.505 --rc genhtml_legend=1 00:54:18.505 --rc geninfo_all_blocks=1 00:54:18.505 --rc geninfo_unexecuted_blocks=1 00:54:18.505 00:54:18.505 ' 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:54:18.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:18.505 --rc genhtml_branch_coverage=1 00:54:18.505 --rc genhtml_function_coverage=1 00:54:18.505 --rc genhtml_legend=1 00:54:18.505 --rc geninfo_all_blocks=1 00:54:18.505 --rc geninfo_unexecuted_blocks=1 00:54:18.505 00:54:18.505 ' 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:54:18.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:18.505 --rc genhtml_branch_coverage=1 00:54:18.505 --rc genhtml_function_coverage=1 00:54:18.505 --rc genhtml_legend=1 00:54:18.505 --rc geninfo_all_blocks=1 00:54:18.505 --rc geninfo_unexecuted_blocks=1 00:54:18.505 00:54:18.505 ' 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:18.505 
10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:54:18.505 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
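[editor's note] nvmftestinit is now preparing the virtual test network. As a reading aid, here is a condensed sketch of the veth/namespace topology that the nvmf_veth_init calls below construct; device names, addresses, and iptables rules are taken from this log, but the grouping into loops, the command ordering, and the simplified iptables comment tag are editorial, not the harness's verbatim code.

```bash
# Condensed sketch of the topology nvmf_veth_init builds (names and addresses as in this log).
ip netns add nvmf_tgt_ns_spdk

# Four veth pairs: two initiator-facing, two target-facing.
ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator, 10.0.0.1
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator, 10.0.0.2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target,    10.0.0.3
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target,    10.0.0.4

# The target ends move into the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up

# A bridge ties the host-side peer ends together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$dev" up
  ip link set "$dev" master nvmf_br
done

# Open the NVMe/TCP port and allow bridge-internal forwarding (the harness tags
# these rules with an SPDK_NVMF comment so cleanup can strip them later).
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF
```

The target ends of the veth pairs live inside nvmf_tgt_ns_spdk so that the SPDK target (10.0.0.3/10.0.0.4) and the initiators (10.0.0.1/10.0.0.2) exercise a real network stack across the nvmf_br bridge rather than loopback.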
00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:54:18.505 Cannot find device "nvmf_init_br" 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:54:18.505 Cannot find device "nvmf_init_br2" 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:54:18.505 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:54:18.765 Cannot find device "nvmf_tgt_br" 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:54:18.765 Cannot find device "nvmf_tgt_br2" 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:54:18.765 Cannot find device "nvmf_init_br" 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:54:18.765 Cannot find device "nvmf_init_br2" 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:54:18.765 Cannot find device "nvmf_tgt_br" 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:54:18.765 Cannot find device "nvmf_tgt_br2" 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:54:18.765 Cannot find device "nvmf_br" 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:54:18.765 Cannot find device "nvmf_init_if" 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:54:18.765 Cannot find device "nvmf_init_if2" 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:54:18.765 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:54:18.765 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:54:18.765 
10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:54:18.765 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:54:19.024 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:54:19.024 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:54:19.024 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.147 ms 00:54:19.024 00:54:19.024 --- 10.0.0.3 ping statistics --- 00:54:19.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:19.024 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:54:19.024 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:54:19.024 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:54:19.024 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.102 ms 00:54:19.024 00:54:19.024 --- 10.0.0.4 ping statistics --- 00:54:19.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:19.024 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:54:19.024 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:54:19.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:54:19.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:54:19.024 00:54:19.024 --- 10.0.0.1 ping statistics --- 00:54:19.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:19.024 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:54:19.024 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:54:19.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:54:19.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:54:19.024 00:54:19.024 --- 10.0.0.2 ping statistics --- 00:54:19.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:19.024 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:54:19.024 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:54:19.024 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:54:19.024 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:54:19.024 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:54:19.024 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:54:19.024 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:54:19.024 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:54:19.024 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:54:19.024 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:54:19.024 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:54:19.024 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:54:19.024 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:54:19.024 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:54:19.024 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=89182 00:54:19.024 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 89182 00:54:19.024 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:54:19.025 10:05:25 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 89182 ']' 00:54:19.025 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:19.025 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:19.025 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:54:19.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:54:19.025 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:19.025 10:05:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:54:19.025 [2024-12-09 10:05:25.913452] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:54:19.025 [2024-12-09 10:05:25.913545] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:54:19.025 [2024-12-09 10:05:26.064576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:54:19.283 [2024-12-09 10:05:26.118120] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:54:19.283 [2024-12-09 10:05:26.118177] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:54:19.283 [2024-12-09 10:05:26.118183] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:54:19.283 [2024-12-09 10:05:26.118188] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:54:19.283 [2024-12-09 10:05:26.118192] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
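[editor's note] At this point nvmfappstart has launched the target inside the namespace (the reactor notices below show cores 1-3, matching -m 0xE) and waitforlisten 89182 is polling for its RPC socket. A minimal sketch of that launch-and-wait pattern, assuming the default rpc.py socket path; the polling loop here is a simplification of the real helper, which adds retry limits and cleaner error handling:

```bash
# Start the target inside the test namespace, reactors on cores 1-3 (-m 0xE).
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Poll the app's UNIX RPC socket until it answers; bail out if the process died.
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
  sleep 0.2
done
```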
00:54:19.283 [2024-12-09 10:05:26.119081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:54:19.283 [2024-12-09 10:05:26.119194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:54:19.283 [2024-12-09 10:05:26.119198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:54:19.910 10:05:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:19.910 10:05:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:54:19.910 10:05:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:54:19.910 10:05:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:54:19.910 10:05:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:54:19.910 10:05:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:54:19.910 10:05:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:54:20.169 [2024-12-09 10:05:27.064409] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:54:20.169 10:05:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:54:20.427 Malloc0 00:54:20.427 10:05:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:54:20.686 10:05:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:54:20.944 10:05:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:54:21.203 [2024-12-09 10:05:28.010143] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:54:21.203 10:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:54:21.203 [2024-12-09 10:05:28.222109] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:54:21.203 10:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:54:21.462 [2024-12-09 10:05:28.434285] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:54:21.462 10:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=89288 00:54:21.462 10:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:54:21.462 10:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:54:21.462 10:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 89288 /var/tmp/bdevperf.sock 
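[editor's note] For reference, the target bring-up that failover.sh@22-28 just performed, condensed into one block. All paths and arguments appear verbatim in the log above; only the loop over the three ports is editorial shorthand.

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

"$rpc" nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, 8 KiB in-capsule data
"$rpc" bdev_malloc_create 64 512 -b Malloc0                    # 64 MB RAM-backed bdev, 512 B blocks
"$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001   # -a: allow any host
"$rpc" nvmf_subsystem_add_ns "$nqn" Malloc0

# Three listeners on the same address; the test later removes them one at a time.
for port in 4420 4421 4422; do
  "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s "$port"
done
```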
00:54:21.462 10:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 89288 ']' 00:54:21.462 10:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:54:21.462 10:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:21.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:54:21.462 10:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:54:21.462 10:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:21.462 10:05:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:54:22.397 10:05:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:22.397 10:05:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:54:22.655 10:05:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:54:22.914 NVMe0n1 00:54:22.914 10:05:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:54:23.172 00:54:23.172 10:05:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=89336 00:54:23.172 10:05:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:54:23.172 10:05:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:54:24.108 10:05:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:54:24.365 10:05:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:54:27.650 10:05:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:54:27.650 00:54:27.650 10:05:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:54:27.909 [2024-12-09 10:05:34.886233] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a290 is same with the state(6) to be set 00:54:27.909 [2024-12-09 10:05:34.886284] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a290 is same with the state(6) to be set 00:54:27.909 [2024-12-09 10:05:34.886291] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a290 is same with the state(6) to be set 00:54:27.909 [2024-12-09 10:05:34.886297] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a290 is same with the state(6) to be set 00:54:27.909 [2024-12-09 10:05:34.886303] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x64a290 is same with the state(6) to be set
[duplicate log spam collapsed: the tcp.c:1790:nvmf_tcp_qpair_set_recv_state *ERROR* line above repeats continuously with advancing timestamps for tqpair=0x64a290 while the 4421 listener is torn down]
recv state of tqpair=0x64a290 is same with the state(6) to be set 00:54:27.910 [2024-12-09 10:05:34.886847] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a290 is same with the state(6) to be set 00:54:27.910 [2024-12-09 10:05:34.886854] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a290 is same with the state(6) to be set 00:54:27.910 [2024-12-09 10:05:34.886863] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a290 is same with the state(6) to be set 00:54:27.910 [2024-12-09 10:05:34.886870] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a290 is same with the state(6) to be set 00:54:27.910 [2024-12-09 10:05:34.886875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a290 is same with the state(6) to be set 00:54:27.910 [2024-12-09 10:05:34.886880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a290 is same with the state(6) to be set 00:54:27.910 [2024-12-09 10:05:34.886885] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a290 is same with the state(6) to be set 00:54:27.910 [2024-12-09 10:05:34.886892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a290 is same with the state(6) to be set 00:54:27.910 [2024-12-09 10:05:34.886900] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a290 is same with the state(6) to be set 00:54:27.910 [2024-12-09 10:05:34.886907] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a290 is same with the state(6) to be set 00:54:27.910 [2024-12-09 10:05:34.886914] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a290 is same with the state(6) to be set 00:54:27.910 [2024-12-09 10:05:34.886919] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a290 is same with the state(6) to be set 00:54:27.910 [2024-12-09 10:05:34.886924] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a290 is same with the state(6) to be set 00:54:27.910 [2024-12-09 10:05:34.886930] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a290 is same with the state(6) to be set 00:54:27.910 [2024-12-09 10:05:34.886938] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a290 is same with the state(6) to be set 00:54:27.910 [2024-12-09 10:05:34.886946] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a290 is same with the state(6) to be set 00:54:27.910 [2024-12-09 10:05:34.886953] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a290 is same with the state(6) to be set 00:54:27.910 [2024-12-09 10:05:34.886959] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a290 is same with the state(6) to be set 00:54:27.910 [2024-12-09 10:05:34.886965] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a290 is same with the state(6) to be set 00:54:27.910 [2024-12-09 10:05:34.886970] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a290 is same with the state(6) to be set 00:54:27.910 [2024-12-09 10:05:34.886977] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a290 is same with the state(6) to be set 00:54:27.910 [2024-12-09 10:05:34.886986] 
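For anyone triaging this flood: the message comes from a guard in SPDK's TCP transport (tcp.c:1790 in this build) that refuses to re-enter the receive state a qpair is already in, so a disconnect path that keeps requesting the same state logs once per call. A minimal C sketch of that guard, with abbreviated stand-in types and illustrative enum values (the real nvmf_tcp_qpair_set_recv_state() in lib/nvmf/tcp.c carries per-state bookkeeping and version-dependent state numbering):

#include <stdio.h>

/* Sketch only: stand-ins for SPDK's real types. */
enum pdu_recv_state {
	PDU_RECV_STATE_AWAIT_PDU_READY = 0,
	/* ... intermediate states elided ... */
	PDU_RECV_STATE_ERROR = 6	/* illustrative: whatever state(6) maps to in this build */
};

struct tcp_qpair {
	enum pdu_recv_state recv_state;
};

static void
set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
{
	if (tqpair->recv_state == state) {
		/* Asking for the state we are already in: log and bail out.
		 * A teardown path that requests the error state once per
		 * event produces exactly the repeated line above. */
		fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
			(void *)tqpair, state);
		return;
	}
	tqpair->recv_state = state;
}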
00:54:27.911 10:05:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:54:31.197 10:05:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
[2024-12-09 10:05:38.135470] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:54:31.197 10:05:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:54:32.130 10:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:54:32.388 [2024-12-09 10:05:39.379270] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x654e50 is same with the state(6) to be set
00:54:32.388 [... the same *ERROR* line repeats verbatim for tqpair=0x654e50, differing only in its microsecond timestamp, through 2024-12-09 10:05:39.379472 ...]
00:54:32.388 10:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 89336
00:54:38.954 {
00:54:38.954   "results": [
00:54:38.954     {
00:54:38.954       "job": "NVMe0n1",
00:54:38.954       "core_mask": "0x1",
00:54:38.954       "workload": "verify",
00:54:38.954       "status": "finished",
00:54:38.954       "verify_range": {
00:54:38.954         "start": 0,
00:54:38.954         "length": 16384
00:54:38.954       },
00:54:38.954       "queue_depth": 128,
00:54:38.954       "io_size": 4096,
00:54:38.954       "runtime": 15.007554,
00:54:38.954       "iops": 9911.941679503536,
00:54:38.954       "mibps": 38.71852218556069,
00:54:38.954       "io_failed": 4389,
00:54:38.954       "io_timeout": 0,
00:54:38.954       "avg_latency_us": 12520.40876334808,
00:54:38.954       "min_latency_us": 504.3982532751092,
00:54:38.954       "max_latency_us": 17056.53100436681
00:54:38.954     }
00:54:38.954   ],
00:54:38.954   "core_count": 1
00:54:38.954 }
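A quick consistency check on these numbers, assuming bdevperf derives mibps from iops and the 4096-byte io_size:

\[ \mathrm{mibps} = \frac{\mathrm{iops} \times \mathrm{io\_size}}{2^{20}} = \frac{9911.9417 \times 4096}{1048576} \approx 38.7185 \]

which matches the reported value exactly. Likewise, iops x runtime = 9911.94 x 15.0076 suggests on the order of 148,750 I/Os completed over the 15 s run, alongside the 4389 io_failed that plausibly correspond to the commands aborted while the listener was moved between ports.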
00:54:38.954 10:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 89288
00:54:38.954 10:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 89288 ']'
00:54:38.954 10:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 89288
00:54:38.954 10:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:54:38.954 10:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:54:38.954 10:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89288
killing process with pid 89288
00:54:38.954 10:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:54:38.954 10:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:54:38.954 10:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89288'
00:54:38.954 10:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 89288
00:54:38.954 10:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 89288
00:54:38.954 10:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:54:38.954 [2024-12-09 10:05:28.514818] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization...
00:54:38.954 [2024-12-09 10:05:28.514917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89288 ]
00:54:38.954 [2024-12-09 10:05:28.665327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:54:38.954 [2024-12-09 10:05:28.743958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:54:38.954 Running I/O for 15 seconds...
00:54:38.954 10293.00 IOPS, 40.21 MiB/s [2024-12-09T10:05:45.998Z]
00:54:38.954 [2024-12-09 10:05:31.308002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:54:38.954 [2024-12-09 10:05:31.308083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:54:38.954 [2024-12-09 10:05:31.308107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:93344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:54:38.954 [2024-12-09 10:05:31.308118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:54:38.954 [... the dump continues in the same command/completion pattern through 10:05:31.310512: every outstanding WRITE (lba:93912-94360, len:8) and READ (lba:93344-93848, len:8) on sqid:1 is printed and completed with ABORTED - SQ DELETION (00/08); the capture is truncated mid-entry ...]
READ sqid:1 cid:38 nsid:1 lba:93856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.958 [2024-12-09 10:05:31.310520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.958 [2024-12-09 10:05:31.310531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:93864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.958 [2024-12-09 10:05:31.310555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.958 [2024-12-09 10:05:31.310564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:93872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.958 [2024-12-09 10:05:31.310574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.958 [2024-12-09 10:05:31.310584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.958 [2024-12-09 10:05:31.310597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.958 [2024-12-09 10:05:31.310607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:93888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.958 [2024-12-09 10:05:31.310616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.958 [2024-12-09 10:05:31.310625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:93896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.958 [2024-12-09 10:05:31.310634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.958 [2024-12-09 10:05:31.310643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebf40 is same with the state(6) to be set 00:54:38.958 [2024-12-09 10:05:31.310657] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:54:38.958 [2024-12-09 10:05:31.310664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:54:38.958 [2024-12-09 10:05:31.310670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93904 len:8 PRP1 0x0 PRP2 0x0 00:54:38.958 [2024-12-09 10:05:31.310681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.958 [2024-12-09 10:05:31.310749] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:54:38.958 [2024-12-09 10:05:31.310809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:54:38.958 [2024-12-09 10:05:31.310820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.958 [2024-12-09 10:05:31.310830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:54:38.958 [2024-12-09 10:05:31.310839] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.958 [2024-12-09 10:05:31.310848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:54:38.958 [2024-12-09 10:05:31.310857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.958 [2024-12-09 10:05:31.310866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:54:38.958 [2024-12-09 10:05:31.310874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.958 [2024-12-09 10:05:31.310883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:54:38.958 [2024-12-09 10:05:31.313678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:54:38.958 [2024-12-09 10:05:31.313712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c7ca00 (9): Bad file descriptor 00:54:38.958 [2024-12-09 10:05:31.336246] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:54:38.958 9928.50 IOPS, 38.78 MiB/s [2024-12-09T10:05:46.002Z] 9950.33 IOPS, 38.87 MiB/s [2024-12-09T10:05:46.002Z] 9949.25 IOPS, 38.86 MiB/s [2024-12-09T10:05:46.002Z] [2024-12-09 10:05:34.887579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:122568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.958 [2024-12-09 10:05:34.887645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.958 [2024-12-09 10:05:34.887667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:122576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.958 [2024-12-09 10:05:34.887709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.958 [2024-12-09 10:05:34.887720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:122584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.958 [2024-12-09 10:05:34.887730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.958 [2024-12-09 10:05:34.887740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:122592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.958 [2024-12-09 10:05:34.887750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.958 [2024-12-09 10:05:34.887760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:122600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.958 [2024-12-09 10:05:34.887769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.958 [2024-12-09 10:05:34.887782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:122608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.958 [2024-12-09 10:05:34.887792] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.958 [2024-12-09 10:05:34.887802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:122616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.958 [2024-12-09 10:05:34.887812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.958 [2024-12-09 10:05:34.887822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:122624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.958 [2024-12-09 10:05:34.887832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.958 [2024-12-09 10:05:34.887842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:122632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.958 [2024-12-09 10:05:34.887851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.958 [2024-12-09 10:05:34.887861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:122640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.958 [2024-12-09 10:05:34.887872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.958 [2024-12-09 10:05:34.887882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:122648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.958 [2024-12-09 10:05:34.887891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.958 [2024-12-09 10:05:34.887901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:122656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.958 [2024-12-09 10:05:34.887911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.958 [2024-12-09 10:05:34.887922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:122664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.958 [2024-12-09 10:05:34.887932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.958 [2024-12-09 10:05:34.887947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:122672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.958 [2024-12-09 10:05:34.887959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.958 [2024-12-09 10:05:34.887976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:122680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.958 [2024-12-09 10:05:34.887985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.958 [2024-12-09 10:05:34.887995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:122688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.958 [2024-12-09 10:05:34.888005] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.958 [2024-12-09 10:05:34.888015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:122696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.958 [2024-12-09 10:05:34.888027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.958 [2024-12-09 10:05:34.888038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:122704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.958 [2024-12-09 10:05:34.888047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.958 [2024-12-09 10:05:34.888058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:122712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.958 [2024-12-09 10:05:34.888067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.958 [2024-12-09 10:05:34.888077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:122720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.958 [2024-12-09 10:05:34.888086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.958 [2024-12-09 10:05:34.888097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:122728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.958 [2024-12-09 10:05:34.888107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.959 [2024-12-09 10:05:34.888117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:122736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.959 [2024-12-09 10:05:34.888126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.959 [2024-12-09 10:05:34.888135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:122744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.959 [2024-12-09 10:05:34.888145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.959 [2024-12-09 10:05:34.888155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:122752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.959 [2024-12-09 10:05:34.888164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.959 [2024-12-09 10:05:34.888175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:122760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.959 [2024-12-09 10:05:34.888184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.959 [2024-12-09 10:05:34.888194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:122768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.959 [2024-12-09 10:05:34.888204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.959 [2024-12-09 10:05:34.888214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:122776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.959 [2024-12-09 10:05:34.888228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.959 [2024-12-09 10:05:34.888239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:122784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.959 [2024-12-09 10:05:34.888248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.959 [2024-12-09 10:05:34.888258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:122792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.959 [2024-12-09 10:05:34.888266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.959 [2024-12-09 10:05:34.888277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:122800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.959 [2024-12-09 10:05:34.888285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.959 [2024-12-09 10:05:34.888295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:122808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.959 [2024-12-09 10:05:34.888304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.959 [2024-12-09 10:05:34.888314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:122816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.959 [2024-12-09 10:05:34.888323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.959 [2024-12-09 10:05:34.888333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:122824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.959 [2024-12-09 10:05:34.888344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.959 [2024-12-09 10:05:34.888356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:122832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.959 [2024-12-09 10:05:34.888364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.959 [2024-12-09 10:05:34.888377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:122840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.959 [2024-12-09 10:05:34.888386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.959 [2024-12-09 10:05:34.888396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:122848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.959 [2024-12-09 10:05:34.888406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:54:38.959 [2024-12-09 10:05:34.888416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:122856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.959 [2024-12-09 10:05:34.888425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.959 [2024-12-09 10:05:34.888435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:122864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.959 [2024-12-09 10:05:34.888444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.959 [2024-12-09 10:05:34.888454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:122872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.959 [2024-12-09 10:05:34.888463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.959 [2024-12-09 10:05:34.888502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:122880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.959 [2024-12-09 10:05:34.888512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.959 [2024-12-09 10:05:34.888522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:122888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.959 [2024-12-09 10:05:34.888532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.959 [2024-12-09 10:05:34.888543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:122896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.959 [2024-12-09 10:05:34.888551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.959 [2024-12-09 10:05:34.888561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:122904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.959 [2024-12-09 10:05:34.888569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.959 [2024-12-09 10:05:34.888579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:122912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.959 [2024-12-09 10:05:34.888587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.959 [2024-12-09 10:05:34.888597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:122920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.959 [2024-12-09 10:05:34.888605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.959 [2024-12-09 10:05:34.888617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:122928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.959 [2024-12-09 10:05:34.888625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:54:38.959 [2024-12-09 10:05:34.888635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:122936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.959 [2024-12-09 10:05:34.888643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.959 [2024-12-09 10:05:34.888654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:122944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.959 [2024-12-09 10:05:34.888664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.959 [2024-12-09 10:05:34.888674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:122952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.959 [2024-12-09 10:05:34.888682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.959 [2024-12-09 10:05:34.888693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:122960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.959 [2024-12-09 10:05:34.888701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.959 [2024-12-09 10:05:34.888711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:122968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.959 [2024-12-09 10:05:34.888719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.959 [2024-12-09 10:05:34.888729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:122976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.960 [2024-12-09 10:05:34.888742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 10:05:34.888752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:122984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.960 [2024-12-09 10:05:34.888762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 10:05:34.888772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:122992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.960 [2024-12-09 10:05:34.888781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 10:05:34.888790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:123000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.960 [2024-12-09 10:05:34.888799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 10:05:34.888809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:123008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.960 [2024-12-09 10:05:34.888817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 
10:05:34.888827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:123016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.960 [2024-12-09 10:05:34.888835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 10:05:34.888844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:123024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.960 [2024-12-09 10:05:34.888852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 10:05:34.888863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:123032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.960 [2024-12-09 10:05:34.888871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 10:05:34.888882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:123040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.960 [2024-12-09 10:05:34.888890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 10:05:34.888900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:123048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.960 [2024-12-09 10:05:34.888908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 10:05:34.888918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:123056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.960 [2024-12-09 10:05:34.888926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 10:05:34.888941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:123064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.960 [2024-12-09 10:05:34.888952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 10:05:34.888962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:123072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.960 [2024-12-09 10:05:34.888970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 10:05:34.888985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:123080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.960 [2024-12-09 10:05:34.888996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 10:05:34.889006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:123088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.960 [2024-12-09 10:05:34.889015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 10:05:34.889025] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:123096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.960 [2024-12-09 10:05:34.889034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 10:05:34.889044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:123104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.960 [2024-12-09 10:05:34.889053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 10:05:34.889063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:123112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.960 [2024-12-09 10:05:34.889072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 10:05:34.889082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:123120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.960 [2024-12-09 10:05:34.889090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 10:05:34.889105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:123128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.960 [2024-12-09 10:05:34.889113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 10:05:34.889124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:123136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.960 [2024-12-09 10:05:34.889132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 10:05:34.889143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:123144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.960 [2024-12-09 10:05:34.889151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 10:05:34.889161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:123152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.960 [2024-12-09 10:05:34.889170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 10:05:34.889179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:123160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:38.960 [2024-12-09 10:05:34.889188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 10:05:34.889198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:123168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.960 [2024-12-09 10:05:34.889207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 10:05:34.889216] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:123176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.960 [2024-12-09 10:05:34.889229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 10:05:34.889239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:123184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.960 [2024-12-09 10:05:34.889248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 10:05:34.889258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:123192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.960 [2024-12-09 10:05:34.889266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 10:05:34.889275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:123200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.960 [2024-12-09 10:05:34.889283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 10:05:34.889293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:123208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.960 [2024-12-09 10:05:34.889305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 10:05:34.889315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:123216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.960 [2024-12-09 10:05:34.889323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 10:05:34.889333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:123224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.960 [2024-12-09 10:05:34.889341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 10:05:34.889351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:123232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.960 [2024-12-09 10:05:34.889359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 10:05:34.889368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:123240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.960 [2024-12-09 10:05:34.889377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 10:05:34.889386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:123248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.960 [2024-12-09 10:05:34.889395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 10:05:34.889406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:114 nsid:1 lba:123256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.960 [2024-12-09 10:05:34.889415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.960 [2024-12-09 10:05:34.889425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:123264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.960 [2024-12-09 10:05:34.889433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.889444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:123272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.961 [2024-12-09 10:05:34.889452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.889461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:123280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.961 [2024-12-09 10:05:34.889482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.889493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:123288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.961 [2024-12-09 10:05:34.889501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.889510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.961 [2024-12-09 10:05:34.889519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.889529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:123304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.961 [2024-12-09 10:05:34.889537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.889546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:123312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.961 [2024-12-09 10:05:34.889555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.889564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:123320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.961 [2024-12-09 10:05:34.889573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.889582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:123328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.961 [2024-12-09 10:05:34.889590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.889600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:123336 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.961 [2024-12-09 10:05:34.889611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.889620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:123344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.961 [2024-12-09 10:05:34.889631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.889640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.961 [2024-12-09 10:05:34.889649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.889659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:123360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.961 [2024-12-09 10:05:34.889668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.889677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:123368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.961 [2024-12-09 10:05:34.889685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.889695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:123376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.961 [2024-12-09 10:05:34.889703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.889719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:123384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.961 [2024-12-09 10:05:34.889728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.889737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:123392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.961 [2024-12-09 10:05:34.889746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.889755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:123400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.961 [2024-12-09 10:05:34.889763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.889773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:123408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.961 [2024-12-09 10:05:34.889781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.889790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:123416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:54:38.961 [2024-12-09 10:05:34.889799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.889808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.961 [2024-12-09 10:05:34.889817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.889826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:123432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.961 [2024-12-09 10:05:34.889835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.889845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.961 [2024-12-09 10:05:34.889853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.889862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:123448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.961 [2024-12-09 10:05:34.889870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.889880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:123456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.961 [2024-12-09 10:05:34.889888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.889897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:123464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.961 [2024-12-09 10:05:34.889908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.889917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.961 [2024-12-09 10:05:34.889925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.889939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:123480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.961 [2024-12-09 10:05:34.889957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.889970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:123488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.961 [2024-12-09 10:05:34.889982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.889996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:123496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.961 [2024-12-09 10:05:34.890005] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.890014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:123504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.961 [2024-12-09 10:05:34.890023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.890035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:123512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.961 [2024-12-09 10:05:34.890044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.890053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:123520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.961 [2024-12-09 10:05:34.890062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.890072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:123528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.961 [2024-12-09 10:05:34.890080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.890091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:123536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.961 [2024-12-09 10:05:34.890099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.890109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:123544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.961 [2024-12-09 10:05:34.890117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.890127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:123552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.961 [2024-12-09 10:05:34.890135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.961 [2024-12-09 10:05:34.890145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:123560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.962 [2024-12-09 10:05:34.890154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.962 [2024-12-09 10:05:34.890163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:123568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.962 [2024-12-09 10:05:34.890172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.962 [2024-12-09 10:05:34.890181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:123576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:38.962 [2024-12-09 10:05:34.890190] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.962 [2024-12-09 10:05:34.890225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:54:38.962 [2024-12-09 10:05:34.890233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:54:38.962 [2024-12-09 10:05:34.890240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123584 len:8 PRP1 0x0 PRP2 0x0 00:54:38.962 [2024-12-09 10:05:34.890250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.962 [2024-12-09 10:05:34.890318] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:54:38.962 [2024-12-09 10:05:34.890364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:54:38.962 [2024-12-09 10:05:34.890375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.962 [2024-12-09 10:05:34.890386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:54:38.962 [2024-12-09 10:05:34.890394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.962 [2024-12-09 10:05:34.890404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:54:38.962 [2024-12-09 10:05:34.890412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.962 [2024-12-09 10:05:34.890421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:54:38.962 [2024-12-09 10:05:34.890430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:38.962 [2024-12-09 10:05:34.890439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:54:38.962 [2024-12-09 10:05:34.890493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c7ca00 (9): Bad file descriptor 00:54:38.962 [2024-12-09 10:05:34.893286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:54:38.962 [2024-12-09 10:05:34.923907] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:54:38.962 9835.00 IOPS, 38.42 MiB/s [2024-12-09T10:05:46.006Z] 9919.00 IOPS, 38.75 MiB/s [2024-12-09T10:05:46.006Z] 9987.43 IOPS, 39.01 MiB/s [2024-12-09T10:05:46.006Z] 10069.00 IOPS, 39.33 MiB/s [2024-12-09T10:05:46.006Z] 10109.56 IOPS, 39.49 MiB/s [2024-12-09T10:05:46.006Z]
00:54:38.962 [2024-12-09 10:05:39.380013 .. 10:05:39.382627] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: (repeated for each queued qid:1 command) WRITE lba:113944-114632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ lba:113624-113936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, every one completed as ABORTED - SQ DELETION (00/08) cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:54:38.965 [2024-12-09 10:05:39.382660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:54:38.965 [2024-12-09 10:05:39.382668] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:54:38.965 [2024-12-09 10:05:39.382675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114640 len:8 PRP1 0x0 PRP2 0x0
00:54:38.965 [2024-12-09 10:05:39.382683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:54:38.965 [2024-12-09 10:05:39.382755] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420
00:54:38.965 [2024-12-09 10:05:39.382805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:54:38.966 [2024-12-09 10:05:39.382816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:54:38.966 [2024-12-09 10:05:39.382834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:54:38.966 [2024-12-09 10:05:39.382843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:54:38.966 [2024-12-09 10:05:39.382852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:54:38.966 [2024-12-09 10:05:39.382862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:54:38.966 [2024-12-09 10:05:39.382873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:54:38.966 [2024-12-09 10:05:39.382882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:54:38.966 [2024-12-09 10:05:39.382892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:54:38.966 [2024-12-09 10:05:39.382925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c7ca00 (9): Bad file descriptor
00:54:38.966 [2024-12-09 10:05:39.385687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:54:38.966 [2024-12-09 10:05:39.410502] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:54:38.966 10086.20 IOPS, 39.40 MiB/s [2024-12-09T10:05:46.010Z] 10091.27 IOPS, 39.42 MiB/s [2024-12-09T10:05:46.010Z] 10010.67 IOPS, 39.10 MiB/s [2024-12-09T10:05:46.010Z] 9950.46 IOPS, 38.87 MiB/s [2024-12-09T10:05:46.010Z] 9894.64 IOPS, 38.65 MiB/s [2024-12-09T10:05:46.010Z] 9908.40 IOPS, 38.70 MiB/s
00:54:38.966 Latency(us)
00:54:38.966 [2024-12-09T10:05:46.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:54:38.966 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:54:38.966 Verification LBA range: start 0x0 length 0x4000
00:54:38.966 NVMe0n1 : 15.01 9911.94 38.72 292.45 0.00 12520.41 504.40 17056.53
00:54:38.966 [2024-12-09T10:05:46.010Z] ===================================================================================================================
00:54:38.966 [2024-12-09T10:05:46.010Z] Total : 9911.94 38.72 292.45 0.00 12520.41 504.40 17056.53
00:54:38.966 Received shutdown signal, test time was about 15.000000 seconds
00:54:38.966
00:54:38.966 Latency(us)
00:54:38.966 [2024-12-09T10:05:46.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:54:38.966 [2024-12-09T10:05:46.010Z] ===================================================================================================================
00:54:38.966 [2024-12-09T10:05:46.010Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:54:38.966 10:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:54:38.966 10:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:54:38.966 10:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:54:38.966 10:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:54:38.966 10:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=89546
00:54:38.966 10:05:45
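
The host/failover.sh@65-67 lines above are the test's pass criterion: the log that bdevperf wrote is grepped for completed controller resets, and the run fails unless all three staged failovers succeeded. Condensed into a standalone Bash sketch (the try.txt path is the log file the test writes and later removes; variable names are illustrative):

    # fail the test unless exactly three controller resets completed
    count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, got $count" >&2
        exit 1
    fi
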
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 89546 /var/tmp/bdevperf.sock 00:54:38.966 10:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 89546 ']' 00:54:38.966 10:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:54:38.966 10:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:38.966 10:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:54:38.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:54:38.966 10:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:38.966 10:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:54:39.534 10:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:39.534 10:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:54:39.534 10:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:54:39.793 [2024-12-09 10:05:46.707490] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:54:39.793 10:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:54:40.052 [2024-12-09 10:05:46.923576] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:54:40.052 10:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:54:40.311 NVMe0n1 00:54:40.311 10:05:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:54:40.570 00:54:40.570 10:05:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:54:40.828 00:54:40.828 10:05:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:54:40.828 10:05:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:54:41.087 10:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:54:41.345 10:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:54:44.629 10:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:54:44.629 10:05:51 
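
The RPC sequence traced in this stretch rebuilds the multipath topology for the second half of the test: two extra listeners on ports 4421 and 4422, bdevperf's bdev_nvme controller attached with -x failover so the additional transport IDs become standby paths, and then the active path on 4420 detached to force a failover. A condensed sketch of the same calls (the $rpc shorthand and the loop are added here for brevity; commands and arguments are the ones in the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # publish two extra portals for the same subsystem
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
    # attach all three portals to one controller; -x failover registers the
    # later trids as standby paths rather than active multipath channels
    for port in 4420 4421 4422; do
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.3 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    done
    # drop the active path; bdev_nvme fails over to the next registered trid
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
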
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:54:44.629 10:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:54:44.629 10:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=89682 00:54:44.629 10:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 89682 00:54:45.564 { 00:54:45.564 "results": [ 00:54:45.564 { 00:54:45.564 "job": "NVMe0n1", 00:54:45.564 "core_mask": "0x1", 00:54:45.564 "workload": "verify", 00:54:45.564 "status": "finished", 00:54:45.564 "verify_range": { 00:54:45.564 "start": 0, 00:54:45.564 "length": 16384 00:54:45.564 }, 00:54:45.564 "queue_depth": 128, 00:54:45.564 "io_size": 4096, 00:54:45.564 "runtime": 1.008286, 00:54:45.564 "iops": 10500.988806747291, 00:54:45.564 "mibps": 41.019487526356606, 00:54:45.564 "io_failed": 0, 00:54:45.564 "io_timeout": 0, 00:54:45.564 "avg_latency_us": 12124.273994948553, 00:54:45.564 "min_latency_us": 1402.2986899563318, 00:54:45.564 "max_latency_us": 14480.880349344978 00:54:45.564 } 00:54:45.564 ], 00:54:45.564 "core_count": 1 00:54:45.564 } 00:54:45.823 10:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:54:45.823 [2024-12-09 10:05:45.537320] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:54:45.823 [2024-12-09 10:05:45.537426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89546 ] 00:54:45.823 [2024-12-09 10:05:45.685114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:45.823 [2024-12-09 10:05:45.759184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:54:45.823 [2024-12-09 10:05:48.239413] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:54:45.823 [2024-12-09 10:05:48.239552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:54:45.823 [2024-12-09 10:05:48.239570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:45.823 [2024-12-09 10:05:48.239585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:54:45.823 [2024-12-09 10:05:48.239594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:45.823 [2024-12-09 10:05:48.239604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:54:45.823 [2024-12-09 10:05:48.239613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:45.823 [2024-12-09 10:05:48.239623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:54:45.823 [2024-12-09 10:05:48.239633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:54:45.823 [2024-12-09 10:05:48.239644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:54:45.823 [2024-12-09 10:05:48.239691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:54:45.823 [2024-12-09 10:05:48.239716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1534a00 (9): Bad file descriptor 00:54:45.823 [2024-12-09 10:05:48.242283] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:54:45.823 Running I/O for 1 seconds... 00:54:45.823 10413.00 IOPS, 40.68 MiB/s 00:54:45.823 Latency(us) 00:54:45.823 [2024-12-09T10:05:52.867Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:54:45.823 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:54:45.823 Verification LBA range: start 0x0 length 0x4000 00:54:45.823 NVMe0n1 : 1.01 10500.99 41.02 0.00 0.00 12124.27 1402.30 14480.88 00:54:45.823 [2024-12-09T10:05:52.867Z] =================================================================================================================== 00:54:45.823 [2024-12-09T10:05:52.867Z] Total : 10500.99 41.02 0.00 0.00 12124.27 1402.30 14480.88 00:54:45.823 10:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:54:45.823 10:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:54:45.823 10:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:54:46.081 10:05:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:54:46.081 10:05:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:54:46.339 10:05:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:54:46.597 10:05:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:54:49.880 10:05:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:54:49.880 10:05:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:54:49.880 10:05:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 89546 00:54:49.880 10:05:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 89546 ']' 00:54:49.880 10:05:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 89546 00:54:49.880 10:05:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:54:49.880 10:05:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:49.880 10:05:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89546 00:54:49.880 killing process with pid 89546 00:54:49.881 10:05:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:54:49.881 10:05:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:54:49.881 10:05:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89546' 00:54:49.881 10:05:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 89546 00:54:49.881 10:05:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 89546 00:54:50.139 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:54:50.139 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:54:50.397 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:54:50.397 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:54:50.397 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:54:50.397 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:54:50.397 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:54:50.397 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:54:50.397 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:54:50.397 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:54:50.397 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:54:50.397 rmmod nvme_tcp 00:54:50.397 rmmod nvme_fabrics 00:54:50.397 rmmod nvme_keyring 00:54:50.397 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:54:50.397 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:54:50.397 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:54:50.397 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 89182 ']' 00:54:50.397 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 89182 00:54:50.397 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 89182 ']' 00:54:50.397 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 89182 00:54:50.397 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:54:50.397 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:50.397 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89182 00:54:50.397 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:54:50.397 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:54:50.397 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89182' 00:54:50.397 killing process with pid 89182 00:54:50.397 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 89182 00:54:50.397 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 89182 00:54:50.656 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:54:50.656 10:05:57 
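
The killprocess trace above (autotest_common.sh@954-978) reduces to roughly the following helper. This is a sketch of only the logic visible in the trace, not a reproduction of autotest_common.sh:

    # kill an SPDK process by pid: confirm it is alive, refuse to signal a
    # sudo wrapper directly, then terminate and reap it
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0        # already gone
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1    # never plain-kill sudo
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }
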
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:54:50.656 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:54:50.656 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:54:50.656 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:54:50.656 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:54:50.656 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:54:50.656 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:54:50.656 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:54:50.656 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:54:50.656 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:54:50.656 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:54:50.656 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:54:50.656 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:54:50.656 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:54:50.656 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:54:50.656 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:54:50.656 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:54:50.914 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:54:50.914 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:54:50.914 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:54:50.914 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:54:50.914 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:54:50.914 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:50.914 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:54:50.914 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:50.914 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:54:50.914 00:54:50.914 real 0m32.646s 00:54:50.914 user 2m5.497s 00:54:50.914 sys 0m4.754s 00:54:50.914 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:54:50.914 10:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:54:50.914 ************************************ 00:54:50.914 END TEST nvmf_failover 00:54:50.914 ************************************ 00:54:50.914 10:05:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:54:50.914 10:05:57 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:54:50.914 10:05:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:54:50.914 10:05:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:54:50.914 ************************************ 00:54:50.914 START TEST nvmf_host_discovery 00:54:50.914 ************************************ 00:54:50.914 10:05:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:54:51.174 * Looking for test storage... 00:54:51.174 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:54:51.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:51.174 --rc genhtml_branch_coverage=1 00:54:51.174 --rc genhtml_function_coverage=1 00:54:51.174 --rc genhtml_legend=1 00:54:51.174 --rc geninfo_all_blocks=1 00:54:51.174 --rc geninfo_unexecuted_blocks=1 00:54:51.174 00:54:51.174 ' 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:54:51.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:51.174 --rc genhtml_branch_coverage=1 00:54:51.174 --rc genhtml_function_coverage=1 00:54:51.174 --rc genhtml_legend=1 00:54:51.174 --rc geninfo_all_blocks=1 00:54:51.174 --rc geninfo_unexecuted_blocks=1 00:54:51.174 00:54:51.174 ' 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:54:51.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:51.174 --rc genhtml_branch_coverage=1 00:54:51.174 --rc genhtml_function_coverage=1 00:54:51.174 --rc genhtml_legend=1 00:54:51.174 --rc geninfo_all_blocks=1 00:54:51.174 --rc geninfo_unexecuted_blocks=1 00:54:51.174 00:54:51.174 ' 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:54:51.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:51.174 --rc genhtml_branch_coverage=1 00:54:51.174 --rc genhtml_function_coverage=1 00:54:51.174 --rc genhtml_legend=1 00:54:51.174 --rc geninfo_all_blocks=1 00:54:51.174 --rc geninfo_unexecuted_blocks=1 00:54:51.174 00:54:51.174 ' 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:54:51.174 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:54:51.175 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
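The block of NVMF_* variables above, together with the interface and bridge names that follow, defines the virtual topology this discovery test runs on: two initiator veths (10.0.0.1, 10.0.0.2) on the host side, two target veths (10.0.0.3, 10.0.0.4) inside the nvmf_tgt_ns_spdk namespace, all attached to the nvmf_br bridge. Condensed from the ip and iptables commands traced further down in this log, a sketch of that setup for the first initiator/target pair (the script repeats the same steps for the *_if2/*_br2 pair):

    # create the target-side network namespace
    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: the *_if end carries the address, the *_br end joins the bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # addresses from NVMF_FIRST_INITIATOR_IP / NVMF_FIRST_TARGET_IP
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    # bring both sides up and bridge them together
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # admit NVMe/TCP traffic on port 4420 (these rules carry SPDK_NVMF comments in the log)
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The "Cannot find device" and "Cannot open network namespace" messages below are the expected output of the teardown commands that run first: the script deletes any leftover interfaces and namespace from a previous run before rebuilding the topology.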
00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:54:51.175 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:54:51.434 Cannot find device "nvmf_init_br" 00:54:51.434 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:54:51.434 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:54:51.434 Cannot find device "nvmf_init_br2" 00:54:51.434 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:54:51.434 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:54:51.434 Cannot find device "nvmf_tgt_br" 00:54:51.434 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:54:51.434 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:54:51.434 Cannot find device "nvmf_tgt_br2" 00:54:51.434 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:54:51.434 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:54:51.434 Cannot find device "nvmf_init_br" 00:54:51.434 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:54:51.434 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:54:51.434 Cannot find device "nvmf_init_br2" 00:54:51.434 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:54:51.434 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:54:51.434 Cannot find device "nvmf_tgt_br" 00:54:51.434 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:54:51.434 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:54:51.435 Cannot find device "nvmf_tgt_br2" 00:54:51.435 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:54:51.435 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:54:51.435 Cannot find device "nvmf_br" 00:54:51.435 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:54:51.435 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:54:51.435 Cannot find device "nvmf_init_if" 00:54:51.435 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:54:51.435 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:54:51.435 Cannot find device "nvmf_init_if2" 00:54:51.435 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:54:51.435 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:54:51.435 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:54:51.435 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:54:51.435 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:54:51.435 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:54:51.435 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:54:51.435 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:54:51.435 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:54:51.435 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:54:51.435 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:54:51.435 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:54:51.693 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:54:51.693 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:54:51.693 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:54:51.693 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:54:51.693 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:54:51.693 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:54:51.693 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:54:51.693 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:54:51.693 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:54:51.693 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:54:51.693 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:54:51.693 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:54:51.693 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:54:51.693 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:54:51.693 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:54:51.693 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:54:51.693 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:54:51.693 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:54:51.693 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:54:51.693 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:54:51.693 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:54:51.693 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:54:51.693 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:54:51.693 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:54:51.693 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:54:51.693 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:54:51.693 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:54:51.693 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:54:51.693 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:54:51.693 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.134 ms 00:54:51.693 00:54:51.693 --- 10.0.0.3 ping statistics --- 00:54:51.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:51.694 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:54:51.694 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:54:51.694 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:54:51.694 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.098 ms 00:54:51.694 00:54:51.694 --- 10.0.0.4 ping statistics --- 00:54:51.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:51.694 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:54:51.694 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:54:51.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:54:51.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:54:51.694 00:54:51.694 --- 10.0.0.1 ping statistics --- 00:54:51.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:51.694 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:54:51.694 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:54:51.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:54:51.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:54:51.694 00:54:51.694 --- 10.0.0.2 ping statistics --- 00:54:51.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:51.694 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:54:51.694 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:54:51.694 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:54:51.694 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:54:51.694 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:54:51.694 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:54:51.694 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:54:51.694 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:54:51.694 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:54:51.694 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:54:51.694 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:54:51.694 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:54:51.694 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:54:51.694 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:51.952 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=90035 00:54:51.952 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:54:51.952 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 90035 00:54:51.952 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 90035 ']' 00:54:51.952 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:51.952 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:51.952 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:54:51.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:54:51.952 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:51.952 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:51.952 [2024-12-09 10:05:58.801203] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:54:51.952 [2024-12-09 10:05:58.801725] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:54:51.952 [2024-12-09 10:05:58.941220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:52.211 [2024-12-09 10:05:58.997688] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:54:52.211 [2024-12-09 10:05:58.997744] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:54:52.211 [2024-12-09 10:05:58.997751] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:54:52.211 [2024-12-09 10:05:58.997756] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:54:52.211 [2024-12-09 10:05:58.997760] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:54:52.211 [2024-12-09 10:05:58.998044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:54:52.777 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:52.777 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:54:52.777 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:54:52.777 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:54:52.777 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:52.777 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:54:52.777 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:54:52.777 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:52.777 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:52.777 [2024-12-09 10:05:59.765008] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:54:52.777 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:52.777 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:54:52.777 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:52.777 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:52.777 [2024-12-09 10:05:59.777085] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:54:52.777 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:52.777 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:54:52.777 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:52.777 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:52.777 null0 00:54:52.777 10:05:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:52.777 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:54:52.777 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:52.777 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:52.777 null1 00:54:52.777 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:52.777 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:54:52.777 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:52.777 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:52.777 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:53.036 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=90085 00:54:53.036 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:54:53.036 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 90085 /tmp/host.sock 00:54:53.036 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 90085 ']' 00:54:53.036 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:54:53.036 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:53.036 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:54:53.036 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:54:53.036 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:53.036 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:53.036 [2024-12-09 10:05:59.873516] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:54:53.036 [2024-12-09 10:05:59.873582] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90085 ] 00:54:53.036 [2024-12-09 10:06:00.002587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:53.036 [2024-12-09 10:06:00.066086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:54:53.973 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:53.973 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:54:53.973 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:54:53.973 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:54:53.973 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:53.973 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:53.973 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:53.973 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:54:53.973 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:53.973 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:53.973 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:53.973 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:54:53.973 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:54:53.973 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:54:53.973 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:53.973 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:53.973 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:54:53.973 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:54:53.973 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:54:53.973 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:53.973 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:54:53.973 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:54:53.973 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:54:53.973 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:54:53.973 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:54:53.973 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
xargs 00:54:53.973 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:53.973 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:53.973 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:53.973 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:54:53.973 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:54:53.973 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:53.973 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:53.974 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:53.974 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:54:53.974 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:54:53.974 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:53.974 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:54:53.974 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:53.974 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:54:53.974 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:54:53.974 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:53.974 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:54:53.974 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:54:53.974 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:54:53.974 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:54:53.974 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:54:53.974 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:54:53.974 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:53.974 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:53.974 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:53.974 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:54:53.974 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:54:53.974 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:53.974 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 
-- # xargs 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:54.232 [2024-12-09 10:06:01.130758] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:54:54.232 10:06:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:54:54.232 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:54:54.233 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:54.233 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:54.491 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:54.491 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:54:54.491 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:54:54.491 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:54:54.491 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:54:54.491 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:54:54.491 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:54:54.491 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:54:54.491 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:54.491 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:54.491 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:54:54.491 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:54:54.491 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:54:54.491 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:54.491 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:54:54.491 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:54:55.057 [2024-12-09 10:06:01.823752] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:54:55.057 [2024-12-09 10:06:01.823796] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:54:55.057 [2024-12-09 10:06:01.823812] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:54:55.057 
[2024-12-09 10:06:01.909671] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:54:55.057 [2024-12-09 10:06:01.964001] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:54:55.057 [2024-12-09 10:06:01.964752] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1e28580:1 started. 00:54:55.057 [2024-12-09 10:06:01.966184] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:54:55.057 [2024-12-09 10:06:01.966206] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:54:55.057 [2024-12-09 10:06:01.972274] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1e28580 was disconnected and freed. delete nvme_qpair. 00:54:55.316 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:54:55.316 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:54:55.316 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:54:55.316 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:54:55.316 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:54:55.316 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:54:55.316 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:55.316 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:55.316 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:54:55.316 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:55.575 10:06:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:55.575 [2024-12-09 10:06:02.524062] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1e28920:1 started. 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:55.575 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:54:55.576 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:54:55.576 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:54:55.576 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:54:55.576 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:54:55.576 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:54:55.576 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:54:55.576 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:55.576 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:55.576 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:54:55.576 [2024-12-09 10:06:02.531162] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1e28920 was disconnected and freed. delete nvme_qpair. 
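The notification bookkeeping in the host/discovery.sh@74/@75 lines above pairs `notify_get_notifications -i $notify_id` with a running cursor so each event is counted once. A sketch inferred from the trace (not copied from the script source):

    get_notification_count() {
        # count only notifications newer than the last consumed id
        notification_count=$(rpc_cmd -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        # advance the cursor past what was just counted
        notify_id=$((notify_id + notification_count))
    }

This is why the trace shows notification_count=1 followed by notify_id=1 here, and notification_count=1 / notify_id=2 after the next namespace is added.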
00:54:55.576 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:54:55.576 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:54:55.576 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:55.576 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:54:55.576 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:54:55.576 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:54:55.576 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:54:55.576 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:54:55.576 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:54:55.576 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:54:55.576 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:54:55.576 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:54:55.576 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:54:55.576 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:54:55.576 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:54:55.576 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:55.576 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:55.576 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:55.835 [2024-12-09 10:06:02.636402] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:54:55.835 [2024-12-09 10:06:02.636962] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:54:55.835 [2024-12-09 10:06:02.636990] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:54:55.835 10:06:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:55.835 [2024-12-09 10:06:02.723346] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:54:55.835 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:55.835 [2024-12-09 10:06:02.785807] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:54:55.835 [2024-12-09 10:06:02.785867] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:54:55.836 [2024-12-09 10:06:02.785876] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:54:55.836 [2024-12-09 10:06:02.785881] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:54:55.836 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:54:55.836 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:54:56.783 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:54:56.783 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:54:56.783 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:54:56.783 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:54:56.783 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:54:56.783 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:54:56.783 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:56.783 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:56.783 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:54:56.783 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:57.044 [2024-12-09 10:06:03.899268] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:54:57.044 [2024-12-09 10:06:03.899306] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:54:57.044 [2024-12-09 10:06:03.901900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:54:57.044 [2024-12-09 10:06:03.901932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:57.044 [2024-12-09 10:06:03.901941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:54:57.044 [2024-12-09 10:06:03.901947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:57.044 [2024-12-09 10:06:03.901954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:54:57.044 [2024-12-09 10:06:03.901960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:57.044 [2024-12-09 10:06:03.901966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:54:57.044 [2024-12-09 10:06:03.901971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:57.044 [2024-12-09 10:06:03.901977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0850 is same with the state(6) to be set 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:57.044 [2024-12-09 10:06:03.911850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da0850 (9): Bad file descriptor 00:54:57.044 [2024-12-09 10:06:03.921848] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:54:57.044 [2024-12-09 10:06:03.921870] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:54:57.044 [2024-12-09 10:06:03.921874] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:54:57.044 [2024-12-09 10:06:03.921878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:54:57.044 [2024-12-09 10:06:03.921906] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:54:57.044 [2024-12-09 10:06:03.921976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:57.044 [2024-12-09 10:06:03.921990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da0850 with addr=10.0.0.3, port=4420 00:54:57.044 [2024-12-09 10:06:03.921997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0850 is same with the state(6) to be set 00:54:57.044 [2024-12-09 10:06:03.922008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da0850 (9): Bad file descriptor 00:54:57.044 [2024-12-09 10:06:03.922018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:54:57.044 [2024-12-09 10:06:03.922023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:54:57.044 [2024-12-09 10:06:03.922030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:54:57.044 [2024-12-09 10:06:03.922035] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:54:57.044 [2024-12-09 10:06:03.922064] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:54:57.044 [2024-12-09 10:06:03.922070] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:57.044 [2024-12-09 10:06:03.931895] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:54:57.044 [2024-12-09 10:06:03.931915] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:54:57.044 [2024-12-09 10:06:03.931918] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:54:57.044 [2024-12-09 10:06:03.931922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:54:57.044 [2024-12-09 10:06:03.931942] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:54:57.044 [2024-12-09 10:06:03.931982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:57.044 [2024-12-09 10:06:03.931993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da0850 with addr=10.0.0.3, port=4420 00:54:57.044 [2024-12-09 10:06:03.932000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0850 is same with the state(6) to be set 00:54:57.044 [2024-12-09 10:06:03.932009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da0850 (9): Bad file descriptor 00:54:57.044 [2024-12-09 10:06:03.932024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:54:57.044 [2024-12-09 10:06:03.932030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:54:57.044 [2024-12-09 10:06:03.932036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:54:57.044 [2024-12-09 10:06:03.932041] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:54:57.044 [2024-12-09 10:06:03.932044] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:54:57.044 [2024-12-09 10:06:03.932047] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:54:57.044 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:54:57.045 [2024-12-09 10:06:03.941934] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:54:57.045 [2024-12-09 10:06:03.941954] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:54:57.045 [2024-12-09 10:06:03.941958] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:54:57.045 [2024-12-09 10:06:03.941961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:54:57.045 [2024-12-09 10:06:03.941984] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:54:57.045 [2024-12-09 10:06:03.942032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:57.045 [2024-12-09 10:06:03.942052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da0850 with addr=10.0.0.3, port=4420 00:54:57.045 [2024-12-09 10:06:03.942061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0850 is same with the state(6) to be set 00:54:57.045 [2024-12-09 10:06:03.942075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da0850 (9): Bad file descriptor 00:54:57.045 [2024-12-09 10:06:03.942088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:54:57.045 [2024-12-09 10:06:03.942095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:54:57.045 [2024-12-09 10:06:03.942103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:54:57.045 [2024-12-09 10:06:03.942110] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:54:57.045 [2024-12-09 10:06:03.942115] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:54:57.045 [2024-12-09 10:06:03.942120] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:54:57.045 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:54:57.045 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:54:57.045 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:54:57.045 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:54:57.045 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:54:57.045 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:54:57.045 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:54:57.045 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:57.045 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:54:57.045 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:57.045 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:54:57.045 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:54:57.045 [2024-12-09 10:06:03.952372] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:54:57.045 [2024-12-09 10:06:03.952397] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:54:57.045 [2024-12-09 10:06:03.952401] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:54:57.045 [2024-12-09 10:06:03.952404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:54:57.045 [2024-12-09 10:06:03.952429] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:54:57.045 [2024-12-09 10:06:03.952492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:57.045 [2024-12-09 10:06:03.952516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da0850 with addr=10.0.0.3, port=4420 00:54:57.045 [2024-12-09 10:06:03.952525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0850 is same with the state(6) to be set 00:54:57.045 [2024-12-09 10:06:03.952537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da0850 (9): Bad file descriptor 00:54:57.045 [2024-12-09 10:06:03.952547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:54:57.045 [2024-12-09 10:06:03.952556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:54:57.045 [2024-12-09 10:06:03.952564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:54:57.045 [2024-12-09 10:06:03.952572] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:54:57.045 [2024-12-09 10:06:03.952578] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:54:57.045 [2024-12-09 10:06:03.952583] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:54:57.045 [2024-12-09 10:06:03.962420] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:54:57.045 [2024-12-09 10:06:03.962443] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:54:57.045 [2024-12-09 10:06:03.962449] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:54:57.045 [2024-12-09 10:06:03.962454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:54:57.045 [2024-12-09 10:06:03.962503] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:54:57.045 [2024-12-09 10:06:03.962560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:57.045 [2024-12-09 10:06:03.962579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da0850 with addr=10.0.0.3, port=4420 00:54:57.045 [2024-12-09 10:06:03.962590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0850 is same with the state(6) to be set 00:54:57.045 [2024-12-09 10:06:03.962605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da0850 (9): Bad file descriptor 00:54:57.045 [2024-12-09 10:06:03.962619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:54:57.045 [2024-12-09 10:06:03.962634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:54:57.045 [2024-12-09 10:06:03.962644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:54:57.045 [2024-12-09 10:06:03.962653] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:54:57.045 [2024-12-09 10:06:03.962659] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:54:57.045 [2024-12-09 10:06:03.962664] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:54:57.045 [2024-12-09 10:06:03.972495] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:54:57.045 [2024-12-09 10:06:03.972523] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:54:57.045 [2024-12-09 10:06:03.972538] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:54:57.045 [2024-12-09 10:06:03.972543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:54:57.045 [2024-12-09 10:06:03.972587] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:54:57.045 [2024-12-09 10:06:03.972640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:57.045 [2024-12-09 10:06:03.972657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da0850 with addr=10.0.0.3, port=4420 00:54:57.045 [2024-12-09 10:06:03.972668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0850 is same with the state(6) to be set 00:54:57.045 [2024-12-09 10:06:03.972683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da0850 (9): Bad file descriptor 00:54:57.045 [2024-12-09 10:06:03.972696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:54:57.045 [2024-12-09 10:06:03.972704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:54:57.045 [2024-12-09 10:06:03.972714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:54:57.045 [2024-12-09 10:06:03.972723] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:54:57.045 [2024-12-09 10:06:03.972728] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:54:57.045 [2024-12-09 10:06:03.972734] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:54:57.045 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:57.045 [2024-12-09 10:06:03.982577] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:54:57.045 [2024-12-09 10:06:03.982600] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:54:57.045 [2024-12-09 10:06:03.982603] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:54:57.045 [2024-12-09 10:06:03.982607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:54:57.045 [2024-12-09 10:06:03.982629] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:54:57.045 [2024-12-09 10:06:03.982670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:57.045 [2024-12-09 10:06:03.982682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da0850 with addr=10.0.0.3, port=4420 00:54:57.045 [2024-12-09 10:06:03.982689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0850 is same with the state(6) to be set 00:54:57.045 [2024-12-09 10:06:03.982698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da0850 (9): Bad file descriptor 00:54:57.045 [2024-12-09 10:06:03.982707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:54:57.045 [2024-12-09 10:06:03.982712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:54:57.045 [2024-12-09 10:06:03.982718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:54:57.045 [2024-12-09 10:06:03.982723] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:54:57.045 [2024-12-09 10:06:03.982727] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:54:57.045 [2024-12-09 10:06:03.982729] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
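The repeated Delete-qpairs/reconnect records above (six iterations, roughly every 10 ms) are the expected fallout of nvmf_subsystem_remove_listener: connect() to 10.0.0.3:4420 now fails with errno 111 (ECONNREFUSED), so bdev_nvme keeps retrying until the next discovery log page marks the 4420 path "not found" and detaches it. The test then confirms only the second port remains, using the helper traced at host/discovery.sh@63; a sketch of that check, reconstructed from the trace:

    get_subsystem_paths() {
        # list the trsvcid of every active path of one controller
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    # after the 4420 listener is removed, the poller should converge on:
    [[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]   # i.e. 4421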
00:54:57.045 [2024-12-09 10:06:03.985360] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:54:57.045 [2024-12-09 10:06:03.985383] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:54:57.045 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:54:57.045 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:54:57.046 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:54:57.046 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:54:57.046 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:54:57.046 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:54:57.046 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:54:57.046 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:54:57.046 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:54:57.046 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:57.046 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:57.046 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:54:57.046 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:54:57.046 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:54:57.046 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:57.046 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:54:57.046 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:54:57.046 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:54:57.046 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:54:57.046 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:54:57.046 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:54:57.046 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:54:57.046 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:54:57.046 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:54:57.046 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@921 -- # get_notification_count 00:54:57.046 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:54:57.046 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:57.046 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:57.046 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:54:57.046 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:54:57.305 10:06:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:57.305 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:58.245 [2024-12-09 10:06:05.267459] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:54:58.245 [2024-12-09 10:06:05.267508] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:54:58.245 [2024-12-09 10:06:05.267524] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:54:58.507 [2024-12-09 10:06:05.354363] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:54:58.507 [2024-12-09 10:06:05.412578] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:54:58.507 [2024-12-09 10:06:05.413107] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1e228d0:1 started. 00:54:58.507 [2024-12-09 10:06:05.414900] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:54:58.507 [2024-12-09 10:06:05.414934] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:54:58.507 [2024-12-09 10:06:05.416047] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1e228d0 was disconnected and freed. delete nvme_qpair. 
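At host/discovery.sh@141-@143 the test restarts discovery under the same bdev name and then asserts that a second bdev_nvme_start_discovery with name "nvme" is rejected: the target answers with JSON-RPC code -17 (File exists), i.e. -EEXIST, as the error record below shows. The `NOT` wrapper inverts the exit status, so the test passes only when the RPC fails:

    # a duplicate discovery service name must fail with -17 / File exists
    NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w

The valid_exec_arg / es=1 lines in the trace are the NOT helper's internals: it captures the failing exit status and returns success because the failure was expected.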
00:54:58.507 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:58.507 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:54:58.507 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:54:58.507 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:54:58.507 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:54:58.507 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:54:58.507 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:54:58.507 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:54:58.507 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:54:58.507 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:58.507 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:58.507 2024/12/09 10:06:05 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:54:58.507 request: 00:54:58.507 { 00:54:58.507 "method": "bdev_nvme_start_discovery", 00:54:58.507 "params": { 00:54:58.507 "name": "nvme", 00:54:58.507 "trtype": "tcp", 00:54:58.507 "traddr": "10.0.0.3", 00:54:58.507 "adrfam": "ipv4", 00:54:58.507 "trsvcid": "8009", 00:54:58.507 "hostnqn": "nqn.2021-12.io.spdk:test", 00:54:58.507 "wait_for_attach": true 00:54:58.507 } 00:54:58.507 } 00:54:58.507 Got JSON-RPC error response 00:54:58.507 GoRPCClient: error on JSON-RPC call 00:54:58.507 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:54:58.507 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:54:58.507 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:54:58.507 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:54:58.507 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:54:58.507 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:54:58.507 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:54:58.507 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:54:58.507 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:54:58.507 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:54:58.507 10:06:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:58.507 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:58.507 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:58.507 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:54:58.507 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:54:58.507 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:54:58.507 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:58.508 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:58.508 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:54:58.508 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:54:58.508 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:54:58.508 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:58.508 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:54:58.508 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:54:58.508 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:54:58.508 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:54:58.508 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:54:58.508 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:54:58.508 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:54:58.508 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:54:58.508 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:54:58.508 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:58.508 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:58.766 2024/12/09 10:06:05 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:54:58.766 request: 00:54:58.766 { 00:54:58.766 "method": "bdev_nvme_start_discovery", 00:54:58.766 "params": { 00:54:58.766 "name": "nvme_second", 00:54:58.766 "trtype": "tcp", 00:54:58.766 "traddr": "10.0.0.3", 00:54:58.766 "adrfam": "ipv4", 00:54:58.766 "trsvcid": 
"8009", 00:54:58.766 "hostnqn": "nqn.2021-12.io.spdk:test", 00:54:58.766 "wait_for_attach": true 00:54:58.766 } 00:54:58.766 } 00:54:58.766 Got JSON-RPC error response 00:54:58.766 GoRPCClient: error on JSON-RPC call 00:54:58.766 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:54:58.766 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:54:58.766 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:54:58.766 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:54:58.766 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:54:58.766 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:54:58.766 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:54:58.767 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:54:58.767 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:58.767 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:58.767 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:54:58.767 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:54:58.767 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:58.767 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:54:58.767 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:54:58.767 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:54:58.767 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:58.767 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:58.767 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:54:58.767 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:54:58.767 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:54:58.767 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:58.767 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:54:58.767 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:54:58.767 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:54:58.767 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:54:58.767 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:54:58.767 10:06:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:54:58.767 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:54:58.767 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:54:58.767 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:54:58.767 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:58.767 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:54:59.702 [2024-12-09 10:06:06.672787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:59.702 [2024-12-09 10:06:06.672833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e22aa0 with addr=10.0.0.3, port=8010 00:54:59.702 [2024-12-09 10:06:06.672851] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:54:59.702 [2024-12-09 10:06:06.672858] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:54:59.702 [2024-12-09 10:06:06.672865] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:55:00.638 [2024-12-09 10:06:07.670868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:55:00.638 [2024-12-09 10:06:07.670914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e22aa0 with addr=10.0.0.3, port=8010 00:55:00.638 [2024-12-09 10:06:07.670932] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:55:00.638 [2024-12-09 10:06:07.670939] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:55:00.638 [2024-12-09 10:06:07.670945] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:55:02.019 [2024-12-09 10:06:08.668827] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:55:02.019 2024/12/09 10:06:08 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:55:02.019 request: 00:55:02.019 { 00:55:02.019 "method": "bdev_nvme_start_discovery", 00:55:02.019 "params": { 00:55:02.019 "name": "nvme_second", 00:55:02.019 "trtype": "tcp", 00:55:02.019 "traddr": "10.0.0.3", 00:55:02.019 "adrfam": "ipv4", 00:55:02.019 "trsvcid": "8010", 00:55:02.019 "hostnqn": "nqn.2021-12.io.spdk:test", 00:55:02.019 "wait_for_attach": false, 00:55:02.019 "attach_timeout_ms": 3000 00:55:02.019 } 00:55:02.019 } 00:55:02.019 Got JSON-RPC error response 00:55:02.019 GoRPCClient: error on JSON-RPC call 00:55:02.019 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:55:02.019 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:55:02.019 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:55:02.019 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@674 -- # [[ -n '' ]] 00:55:02.019 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:55:02.019 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:55:02.019 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:55:02.019 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:02.019 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:02.019 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:55:02.019 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:55:02.019 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:55:02.019 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:02.019 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:55:02.019 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:55:02.019 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 90085 00:55:02.019 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:55:02.019 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:55:02.019 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:55:02.019 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:55:02.019 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:55:02.019 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:55:02.019 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:55:02.019 rmmod nvme_tcp 00:55:02.019 rmmod nvme_fabrics 00:55:02.019 rmmod nvme_keyring 00:55:02.019 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:55:02.019 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:55:02.019 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:55:02.019 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 90035 ']' 00:55:02.019 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 90035 00:55:02.020 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 90035 ']' 00:55:02.020 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 90035 00:55:02.020 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:55:02.020 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:55:02.020 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90035 00:55:02.020 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:55:02.020 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:55:02.020 
killing process with pid 90035 00:55:02.020 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90035' 00:55:02.020 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 90035 00:55:02.020 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 90035 00:55:02.020 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:55:02.020 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:55:02.020 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:55:02.020 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:55:02.020 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:55:02.020 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:55:02.020 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:55:02.020 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:55:02.020 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:55:02.020 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:55:02.020 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:55:02.279 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:55:02.279 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:55:02.279 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:55:02.279 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:55:02.279 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:55:02.279 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:55:02.279 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:55:02.279 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:55:02.279 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:55:02.279 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:55:02.279 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:55:02.279 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:55:02.279 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:02.279 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:55:02.279 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:02.279 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 
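nvmftestfini above tears down the virtual topology in roughly the reverse order it was built: unload the nvme-tcp modules, kill the target, strip the SPDK-tagged firewall rules, then dismantle the veth/bridge fabric. A condensed sketch using the commands visible in this trace; the final namespace removal runs inside _remove_spdk_ns, whose output is suppressed, so that step is assumed:

    # Drop firewall rules tagged SPDK_NVMF (a plausible reconstruction of
    # the iptables-save / grep -v SPDK_NVMF / iptables-restore trio above).
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Detach links from the bridge, bring them down, then delete them.
    ip link set nvmf_init_br nomaster
    ip link set nvmf_tgt_br nomaster
    ip link set nvmf_init_br down
    ip link set nvmf_tgt_br down
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    # Assumed: _remove_spdk_ns finishes by deleting the namespace itself.
    ip netns delete nvmf_tgt_ns_spdk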
00:55:02.279 00:55:02.279 real 0m11.393s 00:55:02.279 user 0m21.164s 00:55:02.279 sys 0m1.923s 00:55:02.279 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:55:02.279 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:02.279 ************************************ 00:55:02.279 END TEST nvmf_host_discovery 00:55:02.279 ************************************ 00:55:02.539 10:06:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:55:02.539 10:06:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:55:02.539 10:06:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:55:02.539 10:06:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:55:02.539 ************************************ 00:55:02.539 START TEST nvmf_host_multipath_status 00:55:02.539 ************************************ 00:55:02.539 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:55:02.539 * Looking for test storage... 00:55:02.539 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:55:02.539 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:55:02.539 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:55:02.539 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:55:02.539 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:55:02.539 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:55:02.539 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:55:02.539 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:55:02.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:02.800 --rc genhtml_branch_coverage=1 00:55:02.800 --rc genhtml_function_coverage=1 00:55:02.800 --rc genhtml_legend=1 00:55:02.800 --rc geninfo_all_blocks=1 00:55:02.800 --rc geninfo_unexecuted_blocks=1 00:55:02.800 00:55:02.800 ' 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:55:02.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:02.800 --rc genhtml_branch_coverage=1 00:55:02.800 --rc genhtml_function_coverage=1 00:55:02.800 --rc genhtml_legend=1 00:55:02.800 --rc geninfo_all_blocks=1 00:55:02.800 --rc geninfo_unexecuted_blocks=1 00:55:02.800 00:55:02.800 ' 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:55:02.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:02.800 --rc genhtml_branch_coverage=1 00:55:02.800 --rc genhtml_function_coverage=1 00:55:02.800 --rc genhtml_legend=1 00:55:02.800 --rc geninfo_all_blocks=1 00:55:02.800 --rc geninfo_unexecuted_blocks=1 00:55:02.800 00:55:02.800 ' 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:55:02.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:02.800 --rc genhtml_branch_coverage=1 00:55:02.800 --rc genhtml_function_coverage=1 00:55:02.800 --rc genhtml_legend=1 00:55:02.800 --rc geninfo_all_blocks=1 00:55:02.800 --rc geninfo_unexecuted_blocks=1 00:55:02.800 00:55:02.800 ' 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:55:02.800 10:06:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:55:02.800 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:55:02.800 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:55:02.801 Cannot find device "nvmf_init_br" 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:55:02.801 Cannot find device "nvmf_init_br2" 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:55:02.801 Cannot find device "nvmf_tgt_br" 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:55:02.801 Cannot find device "nvmf_tgt_br2" 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:55:02.801 Cannot find device "nvmf_init_br" 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:55:02.801 Cannot find device "nvmf_init_br2" 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:55:02.801 Cannot find device "nvmf_tgt_br" 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:55:02.801 Cannot find device "nvmf_tgt_br2" 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:55:02.801 Cannot find device "nvmf_br" 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:55:02.801 Cannot find device "nvmf_init_if" 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:55:02.801 Cannot find device "nvmf_init_if2" 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:55:02.801 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:55:02.801 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:55:03.061 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:55:03.061 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:55:03.061 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:55:03.061 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:55:03.061 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:55:03.061 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:55:03.061 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:55:03.061 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:55:03.061 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:55:03.061 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:55:03.061 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:55:03.061 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:55:03.061 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:55:03.061 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:55:03.061 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:55:03.061 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:55:03.061 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:55:03.061 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:55:03.061 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:55:03.061 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:55:03.061 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:55:03.061 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:55:03.061 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:55:03.061 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:55:03.061 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:55:03.061 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:55:03.061 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:55:03.061 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:55:03.061 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:55:03.061 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:55:03.061 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:55:03.061 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:55:03.061 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:55:03.061 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:55:03.062 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:55:03.062 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:55:03.062 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:55:03.062 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:55:03.062 00:55:03.062 --- 10.0.0.3 ping statistics --- 00:55:03.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:03.062 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:55:03.062 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:55:03.062 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:55:03.062 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.125 ms 00:55:03.062 00:55:03.062 --- 10.0.0.4 ping statistics --- 00:55:03.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:03.062 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:55:03.062 10:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:55:03.062 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:55:03.062 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:55:03.062 00:55:03.062 --- 10.0.0.1 ping statistics --- 00:55:03.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:03.062 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:55:03.062 10:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:55:03.062 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:55:03.062 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:55:03.062 00:55:03.062 --- 10.0.0.2 ping statistics --- 00:55:03.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:03.062 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:55:03.062 10:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:55:03.062 10:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:55:03.062 10:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:55:03.062 10:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:55:03.062 10:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:55:03.062 10:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:55:03.062 10:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:55:03.062 10:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:55:03.062 10:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:55:03.062 10:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:55:03.062 10:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:55:03.062 10:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:55:03.062 10:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:55:03.062 10:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=90631 00:55:03.062 10:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:55:03.062 10:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 90631 00:55:03.062 10:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 90631 ']' 00:55:03.062 10:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:55:03.062 10:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:55:03.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:55:03.062 10:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
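With connectivity proven by the four pings above, the suite starts the target inside the namespace it just wired up and waits for its RPC socket. A sketch of that launch, with the binary path, flags, and socket copied from this run; the backgrounding and PID capture are assumed from the waitforlisten 90631 call that follows:

    # nvmfappstart -m 0x3: run nvmf_tgt inside the test namespace,
    # shared-memory id 0, all tracepoint groups enabled.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # Block until the app answers on /var/tmp/spdk.sock before issuing RPCs.
    waitforlisten "$nvmfpid"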
00:55:03.062 10:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:55:03.062 10:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:55:03.322 [2024-12-09 10:06:10.120085] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:55:03.322 [2024-12-09 10:06:10.120160] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:55:03.322 [2024-12-09 10:06:10.272030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:55:03.322 [2024-12-09 10:06:10.326657] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:55:03.322 [2024-12-09 10:06:10.326695] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:55:03.322 [2024-12-09 10:06:10.326703] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:55:03.322 [2024-12-09 10:06:10.326708] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:55:03.322 [2024-12-09 10:06:10.326713] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:55:03.322 [2024-12-09 10:06:10.327545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:55:03.322 [2024-12-09 10:06:10.327544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:55:04.269 10:06:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:55:04.269 10:06:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:55:04.269 10:06:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:55:04.269 10:06:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:55:04.269 10:06:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:55:04.269 10:06:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:55:04.269 10:06:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=90631 00:55:04.269 10:06:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:55:04.269 [2024-12-09 10:06:11.284909] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:55:04.269 10:06:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:55:04.528 Malloc0 00:55:04.528 10:06:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:55:04.787 10:06:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:55:05.046 10:06:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:55:05.306 [2024-12-09 10:06:12.194244] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:55:05.306 10:06:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:55:05.565 [2024-12-09 10:06:12.397987] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:55:05.565 10:06:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:55:05.565 10:06:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=90729 00:55:05.565 10:06:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:55:05.565 10:06:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 90729 /var/tmp/bdevperf.sock 00:55:05.565 10:06:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 90729 ']' 00:55:05.565 10:06:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:55:05.565 10:06:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:55:05.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:55:05.565 10:06:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
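The target-side provisioning above (multipath_status.sh@36 through @42) is a handful of RPCs: one transport, one 64 MiB malloc bdev, one subsystem with two TCP listeners. Collected into a sketch, with every command and argument as it appears in the trace:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB, 512 B blocks
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -r -m 2                # ANA reporting on
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4421

The bdev_nvme_attach_controller calls that follow add both ports to one controller with -x multipath, which is what yields the single Nvme0n1 bdev with two I/O paths.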
00:55:05.565 10:06:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable
00:55:05.565 10:06:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:55:06.504 10:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:55:06.504 10:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0
00:55:06.504 10:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:55:06.763 10:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:55:07.022 Nvme0n1
00:55:07.022 10:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:55:07.282 Nvme0n1
00:55:07.282 10:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
00:55:07.282 10:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
00:55:09.822 10:06:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:55:09.822 10:06:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized
00:55:09.822 10:06:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized
00:55:09.822 10:06:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:55:11.199 10:06:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:55:11.199 10:06:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:55:11.199 10:06:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:55:11.199 10:06:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:55:11.199 10:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:55:11.199 10:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:55:11.199 10:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:55:11.199 10:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:55:11.562 10:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:55:11.562 10:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:55:11.562 10:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:55:11.562 10:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:55:11.562 10:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:55:11.562 10:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:55:11.562 10:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:55:11.562 10:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:55:11.836 10:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:55:11.836 10:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:55:11.836 10:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:55:11.836 10:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:55:12.096 10:06:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:55:12.096 10:06:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:55:12.096 10:06:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:55:12.096 10:06:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:55:12.355 10:06:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:55:12.355 10:06:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:55:12.355 10:06:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
00:55:12.614 10:06:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized
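The helper functions expanded at multipath_status.sh@59-60 (set_ANA_state), @64 (port_status) and @68-73 (check_status) are not printed by the log, but their bodies can be read off the xtrace almost verbatim. A reconstruction, a sketch rather than the exact script, with the variable names assumed:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock

    # Set the ANA state of the two listeners of the same subsystem;
    # $1 is the state for port 4420, $2 for port 4421 (trace: @59-60).
    set_ANA_state() {
        "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.3 -s 4420 -n "$1"
        "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.3 -s 4421 -n "$2"
    }

    # Assert one attribute (current/connected/accessible) of the io_path
    # that terminates at the given port, as traced at @64.
    port_status() {
        local port=$1 attr=$2 expected=$3 actual
        actual=$("$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$attr")
        [[ $actual == "$expected" ]]
    }

    # Six assertions per cycle, in the order seen at @68-@73:
    # current(4420, 4421), connected(4420, 4421), accessible(4420, 4421).
    check_status() {
        port_status 4420 current "$1" && port_status 4421 current "$2" \
            && port_status 4420 connected "$3" && port_status 4421 connected "$4" \
            && port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }

Each check_status line in the trace therefore encodes the expected multipath view after an ANA transition; for example "check_status true false true true true true" above asserts that with both listeners optimized, the active_passive policy keeps 4420 as the single current path while both stay connected and accessible.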
00:55:12.873 10:06:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:55:13.810 10:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:55:13.810 10:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:55:13.810 10:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:13.810 10:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:55:14.069 10:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:55:14.069 10:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:55:14.069 10:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:14.069 10:06:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:55:14.327 10:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:14.327 10:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:55:14.327 10:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:14.327 10:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:55:14.586 10:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:14.586 10:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:55:14.586 10:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:14.586 10:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:55:14.586 10:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:14.587 10:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:55:14.587 10:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:14.587 10:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:55:14.845 10:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:14.845 10:06:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:55:14.845 10:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:14.845 10:06:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:55:15.104 10:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:15.104 10:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:55:15.104 10:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:55:15.362 10:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:55:15.621 10:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:55:16.556 10:06:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:55:16.556 10:06:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:55:16.556 10:06:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:16.556 10:06:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:55:16.815 10:06:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:16.815 10:06:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:55:16.815 10:06:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:16.815 10:06:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:55:17.074 10:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:55:17.074 10:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:55:17.075 10:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:17.075 10:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:55:17.333 10:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:17.333 10:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:55:17.333 10:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:17.333 10:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:55:17.593 10:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:17.593 10:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:55:17.593 10:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:17.593 10:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:55:17.873 10:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:17.873 10:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:55:17.873 10:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:17.873 10:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:55:18.132 10:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:18.132 10:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:55:18.132 10:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:55:18.390 10:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:55:18.649 10:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:55:19.586 10:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:55:19.586 10:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:55:19.586 10:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:19.586 10:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:55:19.844 10:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:19.844 10:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 
4421 current false 00:55:19.844 10:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:19.844 10:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:55:20.105 10:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:55:20.105 10:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:55:20.105 10:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:20.105 10:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:55:20.364 10:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:20.364 10:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:55:20.364 10:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:55:20.364 10:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:20.623 10:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:20.623 10:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:55:20.623 10:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:20.623 10:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:55:20.881 10:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:20.881 10:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:55:20.881 10:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:20.881 10:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:55:21.140 10:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:55:21.140 10:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:55:21.140 10:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:55:21.140 10:06:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:55:21.399 10:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:55:22.793 10:06:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:55:22.793 10:06:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:55:22.793 10:06:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:22.793 10:06:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:55:22.793 10:06:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:55:22.793 10:06:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:55:22.793 10:06:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:22.793 10:06:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:55:23.051 10:06:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:55:23.051 10:06:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:55:23.051 10:06:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:55:23.051 10:06:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:23.311 10:06:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:23.311 10:06:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:55:23.311 10:06:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:23.311 10:06:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:55:23.570 10:06:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:23.570 10:06:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:55:23.570 10:06:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:23.570 10:06:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] 
| select (.transport.trsvcid=="4420").accessible' 00:55:23.570 10:06:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:55:23.570 10:06:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:55:23.570 10:06:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:55:23.570 10:06:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:23.830 10:06:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:55:23.830 10:06:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:55:23.830 10:06:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:55:24.089 10:06:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:55:24.348 10:06:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:55:25.289 10:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:55:25.289 10:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:55:25.289 10:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:25.289 10:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:55:25.548 10:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:55:25.548 10:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:55:25.548 10:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:25.548 10:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:55:25.808 10:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:25.808 10:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:55:25.808 10:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:25.808 10:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
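Every one of these checks parses the same RPC response. bdev_nvme_get_io_paths reports one io_paths array per poll group; the shape below is illustrative only, with just the field names used by the jq filters (poll_groups, io_paths, current, connected, accessible, transport.trsvcid) and the identifiers visible elsewhere in the trace (Nvme0n1, TCP, 10.0.0.3) taken from the log:

    $ /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
    {
      "poll_groups": [
        {
          "io_paths": [
            {
              "bdev_name": "Nvme0n1",
              "current": false,
              "connected": true,
              "accessible": true,
              "transport": {
                "trtype": "TCP",
                "traddr": "10.0.0.3",
                "trsvcid": "4420"
              }
            },
            { ... the 4421 path, same shape ... }
          ]
        }
      ]
    }

The jq filter selects the path by trsvcid and prints one attribute, which the [[ ... == ... ]] lines then compare against the expectation passed to port_status.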
00:55:26.068 10:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:26.068 10:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:55:26.068 10:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:26.068 10:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:55:26.327 10:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:26.327 10:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:55:26.327 10:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:26.327 10:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:55:26.587 10:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:55:26.587 10:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:55:26.587 10:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:26.587 10:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:55:26.846 10:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:26.846 10:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:55:27.105 10:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:55:27.105 10:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:55:27.365 10:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:55:27.624 10:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:55:28.562 10:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:55:28.562 10:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:55:28.562 10:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:55:28.562 10:06:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:28.821 10:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:28.821 10:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:55:28.821 10:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:28.821 10:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:55:29.079 10:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:29.079 10:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:55:29.079 10:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:29.079 10:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:55:29.339 10:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:29.339 10:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:55:29.339 10:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:55:29.339 10:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:29.597 10:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:29.597 10:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:55:29.597 10:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:29.597 10:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:55:29.598 10:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:29.598 10:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:55:29.598 10:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:29.598 10:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:55:29.857 10:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:29.857 10:06:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:55:29.857 10:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:55:30.117 10:06:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:55:30.393 10:06:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:55:31.330 10:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:55:31.330 10:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:55:31.330 10:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:31.330 10:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:55:31.589 10:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:55:31.589 10:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:55:31.589 10:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:31.589 10:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:55:31.890 10:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:31.890 10:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:55:31.890 10:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:31.890 10:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:55:32.148 10:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:32.148 10:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:55:32.148 10:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:32.148 10:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:55:32.406 10:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:32.406 10:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:55:32.406 10:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:55:32.406 10:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:32.665 10:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:32.665 10:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:55:32.665 10:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:32.665 10:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:55:32.924 10:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:32.924 10:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:55:32.924 10:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:55:33.183 10:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:55:33.442 10:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:55:34.379 10:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:55:34.379 10:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:55:34.379 10:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:34.379 10:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:55:34.637 10:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:34.637 10:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:55:34.637 10:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:34.637 10:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:55:34.896 10:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:34.896 10:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:55:34.896 10:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:34.896 10:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:55:35.153 10:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:35.153 10:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:55:35.153 10:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:35.153 10:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:55:35.153 10:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:35.153 10:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:55:35.153 10:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:55:35.153 10:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:35.411 10:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:35.412 10:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:55:35.412 10:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:35.412 10:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:55:35.671 10:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:35.671 10:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:55:35.671 10:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:55:35.930 10:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:55:36.189 10:06:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:55:37.126 10:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:55:37.126 10:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:55:37.126 10:06:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:37.126 10:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:55:37.384 10:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:37.384 10:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:55:37.384 10:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:37.384 10:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:55:37.643 10:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:55:37.643 10:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:55:37.643 10:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:37.643 10:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:55:37.901 10:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:37.901 10:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:55:37.901 10:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:37.901 10:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:55:38.160 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:38.160 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:55:38.160 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:38.160 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:55:38.419 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:55:38.419 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:55:38.419 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:55:38.420 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible'
00:55:38.678 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:55:38.678 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 90729
00:55:38.678 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 90729 ']'
00:55:38.678 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 90729
00:55:38.678 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:55:38.679 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:55:38.679 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90729
00:55:38.679 killing process with pid 90729
10:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:55:38.679 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:55:38.679 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90729'
00:55:38.679 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 90729
00:55:38.679 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 90729
00:55:38.679 {
00:55:38.679   "results": [
00:55:38.679     {
00:55:38.679       "job": "Nvme0n1",
00:55:38.679       "core_mask": "0x4",
00:55:38.679       "workload": "verify",
00:55:38.679       "status": "terminated",
00:55:38.679       "verify_range": {
00:55:38.679         "start": 0,
00:55:38.679         "length": 16384
00:55:38.679       },
00:55:38.679       "queue_depth": 128,
00:55:38.679       "io_size": 4096,
00:55:38.679       "runtime": 31.189489,
00:55:38.679       "iops": 8963.660802522285,
00:55:38.679       "mibps": 35.014300009852676,
00:55:38.679       "io_failed": 0,
00:55:38.679       "io_timeout": 0,
00:55:38.679       "avg_latency_us": 14259.906736391878,
00:55:38.679       "min_latency_us": 134.14847161572052,
00:55:38.679       "max_latency_us": 4102725.310043668
00:55:38.679     }
00:55:38.679   ],
00:55:38.679   "core_count": 1
00:55:38.679 }
00:55:38.962 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 90729
10:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:55:38.962 [2024-12-09 10:06:12.464966] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization...
00:55:38.962 [2024-12-09 10:06:12.465057] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90729 ]
[2024-12-09 10:06:12.612905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-09 10:06:12.669213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
Running I/O for 90 seconds...
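A quick consistency check on the job summary just printed: 8963.66 IOPS of 4096-byte I/O is 8963.66 / 256 = 35.01 MiB/s, matching the reported "mibps", and over the 31.19 s "runtime" that is roughly 2.8 x 10^5 verified writes with "io_failed" and "io_timeout" both 0. The job ends as "terminated" rather than running out its 90-second -t budget because killprocess sends kill 90729 once the ANA sweep is complete; the 4.10 s "max_latency_us" outlier is plausibly an I/O that was held while both listeners sat in the inaccessible state and no usable path existed.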
00:55:38.962 11449.00 IOPS, 44.72 MiB/s [2024-12-09T10:06:46.006Z]
11623.50 IOPS, 45.40 MiB/s [2024-12-09T10:06:46.006Z]
11471.00 IOPS, 44.81 MiB/s [2024-12-09T10:06:46.006Z]
11456.25 IOPS, 44.75 MiB/s [2024-12-09T10:06:46.006Z]
11386.60 IOPS, 44.48 MiB/s [2024-12-09T10:06:46.006Z]
11456.33 IOPS, 44.75 MiB/s [2024-12-09T10:06:46.006Z]
11462.71 IOPS, 44.78 MiB/s [2024-12-09T10:06:46.006Z]
11443.00 IOPS, 44.70 MiB/s [2024-12-09T10:06:46.006Z]
11468.56 IOPS, 44.80 MiB/s [2024-12-09T10:06:46.006Z]
11413.90 IOPS, 44.59 MiB/s [2024-12-09T10:06:46.006Z]
11283.91 IOPS, 44.08 MiB/s [2024-12-09T10:06:46.006Z]
11161.08 IOPS, 43.60 MiB/s [2024-12-09T10:06:46.006Z]
11054.31 IOPS, 43.18 MiB/s [2024-12-09T10:06:46.006Z]
[2024-12-09 10:06:28.149723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:29616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.962 [2024-12-09 10:06:28.149786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:55:38.962 [2024-12-09 10:06:28.149816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:29624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.962 [2024-12-09 10:06:28.149829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:55:38.962 [2024-12-09 10:06:28.149847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:29632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.962 [2024-12-09 10:06:28.149857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:55:38.962 [2024-12-09 10:06:28.149873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:29640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.962 [2024-12-09 10:06:28.149885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:55:38.962 [2024-12-09 10:06:28.149902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:29648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.962 [2024-12-09 10:06:28.149912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:55:38.962 [2024-12-09 10:06:28.149928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:29656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.962 [2024-12-09 10:06:28.149938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:55:38.962 [2024-12-09 10:06:28.149954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:29664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.962 [2024-12-09 10:06:28.149965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:55:38.962 [2024-12-09 10:06:28.149981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.962 [2024-12-09 10:06:28.149991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22
cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:55:38.962 [2024-12-09 10:06:28.150044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:29680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.962 [2024-12-09 10:06:28.150057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:55:38.962 [2024-12-09 10:06:28.150073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:29688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.962 [2024-12-09 10:06:28.150110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:55:38.962 [2024-12-09 10:06:28.150126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:29696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.962 [2024-12-09 10:06:28.150136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.150153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:29704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.150167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.150184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:29712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.150196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.150213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:29720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.150224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.150241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.150252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.150269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.150280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.150298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:29744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.150309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.150327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.150337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.150353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:29760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.150364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.150380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:29768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.150390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.150406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:29776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.150417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.150432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:29784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.150449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.150466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:29792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.150490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.150507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:29800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.150518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.150534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:29416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.963 [2024-12-09 10:06:28.150545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.150562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:29424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.963 [2024-12-09 10:06:28.150572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.150589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:29808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.150600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.150617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:29816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.150627] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.150643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.150655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.150673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:29832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.150684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.150702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:29840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.150713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.150733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:29848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.150744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.150760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:29856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.150771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.150788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.150799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.150823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:29872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.150834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.150853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.150864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.150883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:29888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.150895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.150912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:29896 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:55:38.963 [2024-12-09 10:06:28.150923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.150940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:29904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.150951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.150968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:29912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.150982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.150999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:29920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.151012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.151029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:29928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.151039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.151529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:29936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.151555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.151577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:29944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.151589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.151606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:29952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.151618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.151636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:29960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.151647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.151679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:29968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.151691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.151709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:93 nsid:1 lba:29976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.151719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.151736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.151747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.151763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:29992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.151776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.151793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:30000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.151804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.151820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.151831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.151849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:30016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.151859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.151876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:30024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.151889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.151906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:30032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.151917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.151933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:30040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.151944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.151960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:30048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.151971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.151988] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:30056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.151999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.152016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.152032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.152049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:30072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.152059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.152076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:30080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.152088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.152106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:30088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.152116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.152133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:30096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.152143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.152160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:30104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.152170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.152186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:30112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.152199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.152216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:30120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.152227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:38.963 [2024-12-09 10:06:28.152244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:30128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.963 [2024-12-09 10:06:28.152254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:55:38.964 [2024-12-09 10:06:28.152271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:30136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.152281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.152299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:30144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.152310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.152328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:30152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.152339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.152355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:30160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.152371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.152387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:30168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.152400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.152418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.152429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.152445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:30184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.152456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.152484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.152495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.152512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:30200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.152532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.152550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:30208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.152561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:96 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.152578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:30216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.152589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.152606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:30224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.152617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.152634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:30232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.152645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.152662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:30240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.152673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.152690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.152701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.152717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:30256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.152728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.152750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:30264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.152762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.152780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:30272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.152791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.152808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:30280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.152819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.152836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.152846] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.152863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:30296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.152874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.152891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:30304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.152902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.152918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:30312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.152929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.153415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.153434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.153455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:30328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.153470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.153499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:30336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.153510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.153527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:30344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.153537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.153554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:30352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.153566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.153595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.153606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.153623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:30368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 
10:06:28.153634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.153652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:30376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.153665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.153681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:30384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.153693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.153709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.153720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.153737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:30400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.153747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.153764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:30408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.153776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.153792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:30416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.153804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.153823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:30424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.964 [2024-12-09 10:06:28.153833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.153850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:29432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.964 [2024-12-09 10:06:28.153861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.153877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:29440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.964 [2024-12-09 10:06:28.153888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.153906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:29448 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.964 [2024-12-09 10:06:28.153917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.153933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.964 [2024-12-09 10:06:28.153953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.153970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:29464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.964 [2024-12-09 10:06:28.153982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:55:38.964 [2024-12-09 10:06:28.154000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:29472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.965 [2024-12-09 10:06:28.154013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.154030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:29480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.965 [2024-12-09 10:06:28.154040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.154058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:29488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.965 [2024-12-09 10:06:28.154069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.154087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:29496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.965 [2024-12-09 10:06:28.154099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.154116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:29504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.965 [2024-12-09 10:06:28.154127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.154143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:29512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.965 [2024-12-09 10:06:28.154154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.154171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:29520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.965 [2024-12-09 10:06:28.154181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.154199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:29528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.965 [2024-12-09 10:06:28.154211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.154227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:29536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.965 [2024-12-09 10:06:28.154238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.154255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:29544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.965 [2024-12-09 10:06:28.154265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.154283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.965 [2024-12-09 10:06:28.154301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.154317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:29560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.965 [2024-12-09 10:06:28.154328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.154346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:29568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.965 [2024-12-09 10:06:28.154357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.154374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:29576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.965 [2024-12-09 10:06:28.154384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.154403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:29584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.965 [2024-12-09 10:06:28.154416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.154432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:29592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.965 [2024-12-09 10:06:28.154443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.154460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:29600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.965 [2024-12-09 10:06:28.154471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003c p:0 m:0 
dnr:0 00:55:38.965 [2024-12-09 10:06:28.154497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:29608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.965 [2024-12-09 10:06:28.154508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.154524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:30432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.965 [2024-12-09 10:06:28.154535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.154552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:29616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.965 [2024-12-09 10:06:28.154563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.154942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:29624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.965 [2024-12-09 10:06:28.154960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.154980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:29632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.965 [2024-12-09 10:06:28.154991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.155008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:29640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.965 [2024-12-09 10:06:28.155019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.155045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.965 [2024-12-09 10:06:28.155057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.155075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:29656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.965 [2024-12-09 10:06:28.155086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.155104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.965 [2024-12-09 10:06:28.155115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.155131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:29672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.965 [2024-12-09 10:06:28.155142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.155159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:29680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.965 [2024-12-09 10:06:28.155170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.155189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:29688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.965 [2024-12-09 10:06:28.155199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.155217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:29696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.965 [2024-12-09 10:06:28.155227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.155244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.965 [2024-12-09 10:06:28.155257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.155273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:29712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.965 [2024-12-09 10:06:28.155286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.155312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:29720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.965 [2024-12-09 10:06:28.155323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.155340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:29728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.965 [2024-12-09 10:06:28.155350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.155367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:29736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.965 [2024-12-09 10:06:28.155377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.155400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:29744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.965 [2024-12-09 10:06:28.155412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.155429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:29752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.965 [2024-12-09 10:06:28.155442] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.155458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:29760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.965 [2024-12-09 10:06:28.155470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.155499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:29768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.965 [2024-12-09 10:06:28.155510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:55:38.965 [2024-12-09 10:06:28.155544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:29776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.966 [2024-12-09 10:06:28.155555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:55:38.966 [2024-12-09 10:06:28.155573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:29784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.966 [2024-12-09 10:06:28.155586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:55:38.966 [2024-12-09 10:06:28.155606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.966 [2024-12-09 10:06:28.155619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:55:38.966 [2024-12-09 10:06:28.155637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:29800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.966 [2024-12-09 10:06:28.155650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:55:38.966 [2024-12-09 10:06:28.155669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.966 [2024-12-09 10:06:28.155679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:55:38.966 [2024-12-09 10:06:28.155698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.966 [2024-12-09 10:06:28.155710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:55:38.966 [2024-12-09 10:06:28.155728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:29808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.966 [2024-12-09 10:06:28.155740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:55:38.966 [2024-12-09 10:06:28.155757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:29816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:55:38.966 [2024-12-09 10:06:28.155770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:55:38.966 [2024-12-09 10:06:28.155788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:29824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.966 [2024-12-09 10:06:28.155806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:55:38.966 [2024-12-09 10:06:28.155824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:29832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.966 [2024-12-09 10:06:28.155837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:55:38.966 [2024-12-09 10:06:28.155854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:29840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.966 [2024-12-09 10:06:28.155866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:55:38.966 [2024-12-09 10:06:28.155883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:29848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.966 [2024-12-09 10:06:28.155894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:55:38.966 [2024-12-09 10:06:28.155913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:29856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.966 [2024-12-09 10:06:28.155925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:55:38.966 [2024-12-09 10:06:28.155943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.966 [2024-12-09 10:06:28.172271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:55:38.966 [2024-12-09 10:06:28.172342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:29872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.966 [2024-12-09 10:06:28.172359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:55:38.966 [2024-12-09 10:06:28.172379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:29880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.966 [2024-12-09 10:06:28.172394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:55:38.966 [2024-12-09 10:06:28.172414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:29888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.966 [2024-12-09 10:06:28.172427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:55:38.966 [2024-12-09 10:06:28.172447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 
lba:29896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.966 [2024-12-09 10:06:28.172460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:55:38.966 [2024-12-09 10:06:28.172497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:29904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.966 [2024-12-09 10:06:28.172513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:55:38.966 [2024-12-09 10:06:28.172534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:29912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.966 [2024-12-09 10:06:28.172546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:55:38.966 [2024-12-09 10:06:28.172567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:29920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.966 [2024-12-09 10:06:28.172598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:55:38.966 [2024-12-09 10:06:28.172619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:29928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.966 [2024-12-09 10:06:28.172633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:55:38.966 [2024-12-09 10:06:28.172653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:29936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.966 [2024-12-09 10:06:28.172665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:55:38.966 [2024-12-09 10:06:28.172685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:29944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.966 [2024-12-09 10:06:28.172700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:55:38.966 [2024-12-09 10:06:28.172721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:29952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.966 [2024-12-09 10:06:28.172733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:55:38.966 [2024-12-09 10:06:28.172754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.966 [2024-12-09 10:06:28.172766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:55:38.966 [2024-12-09 10:06:28.172787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.966 [2024-12-09 10:06:28.172800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:55:38.966 [2024-12-09 10:06:28.172820] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:29976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.966 [2024-12-09 10:06:28.172833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:55:38.966 [2024-12-09 10:06:28.172853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:29984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.966 [2024-12-09 10:06:28.172866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:55:38.966 [2024-12-09 10:06:28.172886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.966 [2024-12-09 10:06:28.172898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:55:38.966 [2024-12-09 10:06:28.172919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:30000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.966 [2024-12-09 10:06:28.172933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:55:38.966 [2024-12-09 10:06:28.173683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:30008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.966 [2024-12-09 10:06:28.173710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.173735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:30016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.173748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.173781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:30024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.173794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.173814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:30032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.173826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.173846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:30040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.173859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.173880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.173892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 
00:55:38.967 [2024-12-09 10:06:28.173912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:30056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.173925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.173945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.173958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.173978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:30072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.173993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.174013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:30080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.174025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.174045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:30088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.174058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.174077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:30096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.174090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.174112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:30104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.174124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.174144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:30112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.174156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.174184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:30120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.174197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.174217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:30128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.174230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.174249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:30136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.174264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.174284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:30144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.174296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.174315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:30152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.174328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.174348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:30160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.174362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.174381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:30168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.174394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.174414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.174427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.174446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:30184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.174460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.174494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:30192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.174509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.174530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:30200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.174542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.174562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:30208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.174575] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.174596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:30216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.174616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.174637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:30224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.174650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.174672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.174684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.174705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:30240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.174719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.174738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:30248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.174752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.174772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.174786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.174806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:30264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.174820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.174839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:30272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.174853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.174874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:30280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.174888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.174908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:30288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 
10:06:28.174921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.174941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:30296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.174953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.174973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.174986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.175006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:30312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.175026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.175047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:30320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.967 [2024-12-09 10:06:28.175061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:55:38.967 [2024-12-09 10:06:28.175084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:30328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.968 [2024-12-09 10:06:28.175097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.175116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:30336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.968 [2024-12-09 10:06:28.175130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.175150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:30344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.968 [2024-12-09 10:06:28.175164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.175184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:30352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.968 [2024-12-09 10:06:28.175199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.175219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:30360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.968 [2024-12-09 10:06:28.175232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.175253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:30368 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:55:38.968 [2024-12-09 10:06:28.175267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.175303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:30376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.968 [2024-12-09 10:06:28.175318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.175338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:30384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.968 [2024-12-09 10:06:28.175351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.175372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.968 [2024-12-09 10:06:28.175385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.175405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:30400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.968 [2024-12-09 10:06:28.175418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.175440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:30408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.968 [2024-12-09 10:06:28.175453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.175491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:30416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.968 [2024-12-09 10:06:28.175506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.175526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:30424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.968 [2024-12-09 10:06:28.175540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.175560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:29432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.968 [2024-12-09 10:06:28.175574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.175593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:29440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.968 [2024-12-09 10:06:28.175607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.175628] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:16 nsid:1 lba:29448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.968 [2024-12-09 10:06:28.175642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.175663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:29456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.968 [2024-12-09 10:06:28.175676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.175696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:29464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.968 [2024-12-09 10:06:28.175710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.175732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:29472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.968 [2024-12-09 10:06:28.175745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.175766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.968 [2024-12-09 10:06:28.175778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.175798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:29488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.968 [2024-12-09 10:06:28.175811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.175831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:29496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.968 [2024-12-09 10:06:28.175844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.175865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:29504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.968 [2024-12-09 10:06:28.175878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.175905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:29512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.968 [2024-12-09 10:06:28.175918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.175940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.968 [2024-12-09 10:06:28.175954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 
10:06:28.175974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:29528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.968 [2024-12-09 10:06:28.175987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.176008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:29536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.968 [2024-12-09 10:06:28.176021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.176042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.968 [2024-12-09 10:06:28.176055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.176077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:29552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.968 [2024-12-09 10:06:28.176090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.176110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:29560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.968 [2024-12-09 10:06:28.176125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.176144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:29568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.968 [2024-12-09 10:06:28.176159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.176180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:29576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.968 [2024-12-09 10:06:28.176192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.176213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:29584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.968 [2024-12-09 10:06:28.176226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.176247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:29592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.968 [2024-12-09 10:06:28.176260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.176280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:29600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.968 [2024-12-09 10:06:28.176294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:104 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.176320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:29608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.968 [2024-12-09 10:06:28.176334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.176354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.968 [2024-12-09 10:06:28.176368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.177254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:29616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.968 [2024-12-09 10:06:28.177279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:55:38.968 [2024-12-09 10:06:28.177302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.968 [2024-12-09 10:06:28.177317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.177339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:29632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.177352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.177373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:29640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.177389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.177410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.177425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.177445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.177459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.177495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:29664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.177511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.177532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:29672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.177546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.177566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:29680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.177579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.177599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:29688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.177611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.177632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:29696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.177656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.177677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:29704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.177690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.177733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:29712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.177746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.177766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:29720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.177778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.177800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:29728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.177813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.177832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:29736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.177845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.177865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:29744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.177878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.177898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.177911] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.177931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:29760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.177944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.177964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:29768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.177977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.177997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:29776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.178010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.178031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:29784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.178045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.178065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:29792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.178080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.178111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:29800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.178127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.178147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:29416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.969 [2024-12-09 10:06:28.178175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.178201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.969 [2024-12-09 10:06:28.178220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.178247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.178265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.178293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:29816 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:55:38.969 [2024-12-09 10:06:28.178311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.178341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.178358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.178386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:29832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.178405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.178433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:29840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.178451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.178479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:29848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.178514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.178542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:29856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.178560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.178587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:29864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.178605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.178632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:29872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.178649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.178687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:29880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.178706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.178734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:29888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.178752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.178781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:123 nsid:1 lba:29896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.178799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.178826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:29904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.178847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.178875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.178894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:55:38.969 [2024-12-09 10:06:28.178922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:29920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.969 [2024-12-09 10:06:28.178940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:55:38.970 [2024-12-09 10:06:28.178968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:29928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.970 [2024-12-09 10:06:28.178989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:55:38.970 [2024-12-09 10:06:28.179016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:29936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.970 [2024-12-09 10:06:28.179035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:55:38.970 [2024-12-09 10:06:28.179063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:29944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.970 [2024-12-09 10:06:28.179081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:55:38.970 [2024-12-09 10:06:28.179110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.970 [2024-12-09 10:06:28.179128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:55:38.970 [2024-12-09 10:06:28.179157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:29960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.970 [2024-12-09 10:06:28.179175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:55:38.970 [2024-12-09 10:06:28.179203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.970 [2024-12-09 10:06:28.179220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:55:38.970 [2024-12-09 10:06:28.179248] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:29976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.970 [2024-12-09 10:06:28.179309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:55:38.970 [2024-12-09 10:06:28.179339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:29984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.970 [2024-12-09 10:06:28.179356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:55:38.970 [2024-12-09 10:06:28.179385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:29992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.970 [2024-12-09 10:06:28.179402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:55:38.970 [2024-12-09 10:06:28.180286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.970 [2024-12-09 10:06:28.180318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:55:38.970 [2024-12-09 10:06:28.180351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:30008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.970 [2024-12-09 10:06:28.180369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:55:38.970 [2024-12-09 10:06:28.180396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:30016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.970 [2024-12-09 10:06:28.180414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:55:38.970 [2024-12-09 10:06:28.180441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:30024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.970 [2024-12-09 10:06:28.180460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:55:38.970 [2024-12-09 10:06:28.180507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:30032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.970 [2024-12-09 10:06:28.180529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:55:38.970 [2024-12-09 10:06:28.180557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.970 [2024-12-09 10:06:28.180575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:55:38.970 [2024-12-09 10:06:28.180603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:30048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.970 [2024-12-09 10:06:28.180621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 
00:55:38.970 [2024-12-09 10:06:28.180650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:30056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.970 [2024-12-09 10:06:28.180669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:55:38.970 [2024-12-09 10:06:28.180696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:30064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.970 [2024-12-09 10:06:28.180714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:55:38.970 [2024-12-09 10:06:28.180740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.970 [2024-12-09 10:06:28.180774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:55:38.970 [2024-12-09 10:06:28.180803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:30080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.970 [2024-12-09 10:06:28.180821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:55:38.970 [2024-12-09 10:06:28.180849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:30088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.970 [2024-12-09 10:06:28.180867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:55:38.970 [2024-12-09 10:06:28.180896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:30096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.970 [2024-12-09 10:06:28.180914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:55:38.970 [2024-12-09 10:06:28.180942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:30104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.970 [2024-12-09 10:06:28.180963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:55:38.970 [2024-12-09 10:06:28.180991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:30112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.970 [2024-12-09 10:06:28.181009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:55:38.970 [2024-12-09 10:06:28.181037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:30120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.970 [2024-12-09 10:06:28.181056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:38.970 [2024-12-09 10:06:28.181083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:30128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.970 [2024-12-09 10:06:28.181102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:55:38.970 [2024-12-09 10:06:28.181130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.970 [2024-12-09 10:06:28.181149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:55:38.970 [2024-12-09 10:06:28.181177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:30144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.970 [2024-12-09 10:06:28.181194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:55:38.970 [2024-12-09 10:06:28.181223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:30152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.970 [2024-12-09 10:06:28.181241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:55:38.970 [2024-12-09 10:06:28.181269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:30160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.970 [2024-12-09 10:06:28.181289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:55:38.970 [2024-12-09 10:06:28.181316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:30168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.970 [2024-12-09 10:06:28.181335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:55:38.970 [2024-12-09 10:06:28.181371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:30176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.970 [2024-12-09 10:06:28.181390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:55:38.970 [2024-12-09 10:06:28.181418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:30184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.970 [2024-12-09 10:06:28.181436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:55:38.970 [2024-12-09 10:06:28.181463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:30192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.970 [2024-12-09 10:06:28.181499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:55:38.970 [2024-12-09 10:06:28.181528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:30200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.970 [2024-12-09 10:06:28.181547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:55:38.970 [2024-12-09 10:06:28.181574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:30208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.971 [2024-12-09 10:06:28.181593] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:55:38.971 [2024-12-09 10:06:28.181621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:30216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.971 [2024-12-09 10:06:28.181639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:55:38.971 [2024-12-09 10:06:28.181667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.971 [2024-12-09 10:06:28.181685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:55:38.971 [2024-12-09 10:06:28.181713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:30232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.971 [2024-12-09 10:06:28.181731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:55:38.971 [2024-12-09 10:06:28.181758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:30240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.971 [2024-12-09 10:06:28.181777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:55:38.971 [2024-12-09 10:06:28.181804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:30248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.971 [2024-12-09 10:06:28.181822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:55:38.971 [2024-12-09 10:06:28.181850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:30256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.971 [2024-12-09 10:06:28.181868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:55:38.971 [2024-12-09 10:06:28.181895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:30264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.971 [2024-12-09 10:06:28.181914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:55:38.971 [2024-12-09 10:06:28.181952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:30272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.971 [2024-12-09 10:06:28.181970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:55:38.971 [2024-12-09 10:06:28.181998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:30280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.971 [2024-12-09 10:06:28.182016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:55:38.971 [2024-12-09 10:06:28.182044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:30288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:55:38.971 [2024-12-09 10:06:28.182064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:55:38.971 [2024-12-09 10:06:28.182093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:30296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.971 [2024-12-09 10:06:28.182112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:55:38.971 [2024-12-09 10:06:28.182141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:30304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.971 [2024-12-09 10:06:28.182158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:55:38.971 [2024-12-09 10:06:28.182186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.971 [2024-12-09 10:06:28.182205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:55:38.971 [2024-12-09 10:06:28.182232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:30320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.971 [2024-12-09 10:06:28.182251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:55:38.971 [2024-12-09 10:06:28.182279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:30328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.971 [2024-12-09 10:06:28.182297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:55:38.971 [2024-12-09 10:06:28.182324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:30336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.971 [2024-12-09 10:06:28.182341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:55:38.971 [2024-12-09 10:06:28.182370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:30344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.971 [2024-12-09 10:06:28.182388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:55:38.971 [2024-12-09 10:06:28.182416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:30352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.971 [2024-12-09 10:06:28.182434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:55:38.971 [2024-12-09 10:06:28.182462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.971 [2024-12-09 10:06:28.182494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:55:38.971 [2024-12-09 10:06:28.182531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:30368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.971 [2024-12-09 10:06:28.182550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:55:38.971 [2024-12-09 10:06:28.182578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:30376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.971 [2024-12-09 10:06:28.182596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:55:38.971 [2024-12-09 10:06:28.182625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.971 [2024-12-09 10:06:28.182643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:55:38.971 [2024-12-09 10:06:28.182670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:30392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.971 [2024-12-09 10:06:28.182690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:55:38.971 [2024-12-09 10:06:28.182717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:30400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.971 [2024-12-09 10:06:28.182735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:55:38.971 [2024-12-09 10:06:28.182761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:30408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.971 [2024-12-09 10:06:28.182780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:55:38.971 [2024-12-09 10:06:28.182808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:30416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.971 [2024-12-09 10:06:28.182826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:55:38.971 [2024-12-09 10:06:28.182853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:30424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.971 [2024-12-09 10:06:28.182871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:55:38.971 [2024-12-09 10:06:28.182898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.971 [2024-12-09 10:06:28.182916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:55:38.971 [2024-12-09 10:06:28.182944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.971 [2024-12-09 10:06:28.182962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:55:38.971 [2024-12-09 10:06:28.182990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:29448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.971 [2024-12-09 10:06:28.183008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:55:38.971 [2024-12-09 10:06:28.183036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:29456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.971 [2024-12-09 10:06:28.183054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:55:38.971 [2024-12-09 10:06:28.183082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:29464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.971 [2024-12-09 10:06:28.183109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:55:38.971 [2024-12-09 10:06:28.183136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:29472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.971 [2024-12-09 10:06:28.183155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:55:38.971 [2024-12-09 10:06:28.183183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:29480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.971 [2024-12-09 10:06:28.183203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:55:38.971 [2024-12-09 10:06:28.183230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.971 [2024-12-09 10:06:28.183249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:55:38.971 [2024-12-09 10:06:28.183276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:29496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.971 [2024-12-09 10:06:28.183306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:55:38.971 [2024-12-09 10:06:28.183335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:29504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.971 [2024-12-09 10:06:28.183355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:55:38.971 [2024-12-09 10:06:28.183382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:29512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.971 [2024-12-09 10:06:28.183400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.183428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:29520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.972 [2024-12-09 10:06:28.183447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.183487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:29528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.972 [2024-12-09 10:06:28.183507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.183535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:29536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.972 [2024-12-09 10:06:28.183554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.183581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:29544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.972 [2024-12-09 10:06:28.183599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.183626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:29552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.972 [2024-12-09 10:06:28.183645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.183672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.972 [2024-12-09 10:06:28.183699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.183728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:29568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.972 [2024-12-09 10:06:28.183746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.183773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:29576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.972 [2024-12-09 10:06:28.183791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.183819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:29584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.972 [2024-12-09 10:06:28.183836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.183864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:29592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.972 [2024-12-09 10:06:28.183883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.183912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:29600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.972 [2024-12-09 10:06:28.183930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.183958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:29608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.972 [2024-12-09 10:06:28.183978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.185272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.972 [2024-12-09 10:06:28.185308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.185341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:29616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.972 [2024-12-09 10:06:28.185360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.185388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:29624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.972 [2024-12-09 10:06:28.185407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.185435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.972 [2024-12-09 10:06:28.185453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.185507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:29640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.972 [2024-12-09 10:06:28.185528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.185556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.972 [2024-12-09 10:06:28.185575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.185619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:29656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.972 [2024-12-09 10:06:28.185638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.185666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:29664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.972 [2024-12-09 10:06:28.185685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.185712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:29672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.972 [2024-12-09 10:06:28.185731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.185758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:29680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.972 [2024-12-09 10:06:28.185778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.185805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.972 [2024-12-09 10:06:28.185825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.185852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:29696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.972 [2024-12-09 10:06:28.185870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.185898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:29704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.972 [2024-12-09 10:06:28.185916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.185944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:29712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.972 [2024-12-09 10:06:28.185962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.185990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:29720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.972 [2024-12-09 10:06:28.186009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.186036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:29728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.972 [2024-12-09 10:06:28.186057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.186085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:29736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.972 [2024-12-09 10:06:28.186103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.186134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:29744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.972 [2024-12-09 10:06:28.186153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.186190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:29752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.972 [2024-12-09 10:06:28.186210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.186239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:29760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.972 [2024-12-09 10:06:28.186257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.186286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:29768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.972 [2024-12-09 10:06:28.186304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.186333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.972 [2024-12-09 10:06:28.186352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.186381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:29784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.972 [2024-12-09 10:06:28.186399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.186427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.972 [2024-12-09 10:06:28.186445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.186485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.972 [2024-12-09 10:06:28.186504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.186533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:29416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.972 [2024-12-09 10:06:28.186551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.186580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:29424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.972 [2024-12-09 10:06:28.186599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:55:38.972 [2024-12-09 10:06:28.186628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:29808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.972 [2024-12-09 10:06:28.186646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.186675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:29816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.186694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.186721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:29824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.186740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.186768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:29832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.186795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.186824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:29840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.186843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.186872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.186893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.186920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:29856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.186939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.186967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:29864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.186986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.187014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:29872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.187032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.187060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:29880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.187078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.187106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:29888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.187124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.187152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:29896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.187170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.187197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:29904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.187215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.187242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:29912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.187261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.187300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:29920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.187320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.187349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:29928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.187376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.187406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:29936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.187424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.187452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.187482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.187511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.187530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.187558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:29960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.187577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.187606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:29968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.187624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.187653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.187671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.187699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:29984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.187718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.188557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:29992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.188579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.188602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:30000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.188615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.188632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:30008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.188645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.188664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:30016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.188676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.188694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:30024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.188707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.188735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.188748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.188768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:30040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.188780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.188798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.188810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.188829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:30056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.188842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.188859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:30064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.188872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.188890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:30072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.188902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.188921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:30080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.188932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.188950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:30088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.188961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.188980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:30096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.188992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.189010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:30104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.189022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.189041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:30112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.189054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.189071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:30120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.973 [2024-12-09 10:06:28.189085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:55:38.973 [2024-12-09 10:06:28.189110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:30128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.189122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.189141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:30136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.189153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.189170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:30144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.189182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.189201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:30152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.189213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.189231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.189243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.189262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:30168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.189274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.189292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:30176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.189304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.189323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:30184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.189336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.189353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:30192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.189366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.189383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:30200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.189395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.189413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:30208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.189424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.189443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.189456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.189475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:30224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.189507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.189526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:30232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.189539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.189557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.189569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.189587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:30248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.189599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.189619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:30256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.189631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.189649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:30264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.189661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.189679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:30272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.189692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.189710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:30280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.189722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.189740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.189752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.189770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:30296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.189782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.189799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:30304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.189812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.189831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:30312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.189842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.189860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:30320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.189877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.189896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:30328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.189908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.189926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:30336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.189938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.189956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:30344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.189969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.189987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:30352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.189999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.190017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:30360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.190029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.190048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:30368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.190060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.190077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.190090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.190109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:30384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.190121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.190139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:30392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.190152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.190170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:30400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.190182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.190201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:30408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.190213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.190231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:30416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.190242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.190266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:30424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.974 [2024-12-09 10:06:28.190278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.190296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:29432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.974 [2024-12-09 10:06:28.190308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.190327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:29440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.974 [2024-12-09 10:06:28.190339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:55:38.974 [2024-12-09 10:06:28.190357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:29448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.975 [2024-12-09 10:06:28.190369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:55:38.975 [2024-12-09 10:06:28.190388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:29456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.975 [2024-12-09 10:06:28.190400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:55:38.975 [2024-12-09 10:06:28.190419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.975 [2024-12-09 10:06:28.190431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:55:38.975 [2024-12-09 10:06:28.190449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:29472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.975 [2024-12-09 10:06:28.190462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:55:38.975 [2024-12-09 10:06:28.190489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:29480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.975 [2024-12-09 10:06:28.190501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:55:38.975 [2024-12-09 10:06:28.190520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:29488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.975 [2024-12-09 10:06:28.190533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:55:38.975 [2024-12-09 10:06:28.190551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:29496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.975 [2024-12-09 10:06:28.190563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:55:38.975 [2024-12-09 10:06:28.190581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.975 [2024-12-09 10:06:28.190594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:55:38.975 [2024-12-09 10:06:28.190612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:29512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.975 [2024-12-09 10:06:28.190624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:55:38.975 [2024-12-09 10:06:28.190648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:29520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.975 [2024-12-09 10:06:28.190662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:55:38.975 [2024-12-09 10:06:28.190681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.975 [2024-12-09 10:06:28.190692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:55:38.975 [2024-12-09 10:06:28.190710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:29536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.975 [2024-12-09 10:06:28.190722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:55:38.975 [2024-12-09 10:06:28.190741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:29544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.975 [2024-12-09 10:06:28.190753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:55:38.975 [2024-12-09 10:06:28.190771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:29552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.975 [2024-12-09 10:06:28.190783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:55:38.975 [2024-12-09 10:06:28.190801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:29560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.975 [2024-12-09 10:06:28.190813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:55:38.975 [2024-12-09 10:06:28.190832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:29568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.975 [2024-12-09 10:06:28.190844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:55:38.975 [2024-12-09 10:06:28.190861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:29576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.975 [2024-12-09 10:06:28.190875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:55:38.975 [2024-12-09 10:06:28.190893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:29584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.975 [2024-12-09 10:06:28.190905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:55:38.975 [2024-12-09 10:06:28.190923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:29592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.975 [2024-12-09 10:06:28.190937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:55:38.975 [2024-12-09 10:06:28.190954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.975 [2024-12-09 10:06:28.190966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:55:38.975 [2024-12-09 10:06:28.191784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:29608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.975 [2024-12-09 10:06:28.191808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:55:38.975 [2024-12-09 10:06:28.191832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.975 [2024-12-09 10:06:28.191854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:55:38.975 [2024-12-09 10:06:28.191873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:29616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.975 [2024-12-09 10:06:28.191885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:55:38.975 [2024-12-09 10:06:28.191903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:29624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.975 [2024-12-09 10:06:28.191915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:55:38.975 [2024-12-09 10:06:28.191933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.975 [2024-12-09 10:06:28.191944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:55:38.975 [2024-12-09 10:06:28.191963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.975 [2024-12-09 10:06:28.191974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:55:38.975 [2024-12-09 10:06:28.191993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:29648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.975 [2024-12-09 10:06:28.192005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:55:38.975 [2024-12-09 10:06:28.192024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:29656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.975 [2024-12-09 10:06:28.192036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:55:38.975 [2024-12-09 10:06:28.192055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:29664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.975 [2024-12-09 10:06:28.192067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:55:38.975 [2024-12-09 10:06:28.192085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:29672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.975 [2024-12-09 10:06:28.192105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:55:38.975 [2024-12-09 10:06:28.192123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:29680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.975 [2024-12-09 10:06:28.192135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:55:38.975 [2024-12-09 10:06:28.192155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:29688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.975 [2024-12-09 10:06:28.192167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:55:38.975 [2024-12-09 10:06:28.192185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:29696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.975 [2024-12-09 10:06:28.192198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:55:38.975 [2024-12-09 10:06:28.192216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:29704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.975 [2024-12-09 10:06:28.192233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:55:38.975 [2024-12-09 10:06:28.192252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:29712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.975 [2024-12-09 10:06:28.192264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:55:38.975 [2024-12-09 10:06:28.192282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:29720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.976 [2024-12-09 10:06:28.192294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:55:38.976 [2024-12-09 10:06:28.192312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:29728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.976 [2024-12-09 10:06:28.192325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:55:38.976 [2024-12-09 10:06:28.192342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.976 [2024-12-09 10:06:28.192354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:55:38.976 [2024-12-09 10:06:28.192372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:29744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.976 [2024-12-09 10:06:28.192384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:55:38.976 [2024-12-09 10:06:28.192402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:29752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.976 [2024-12-09 10:06:28.192415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:55:38.976 [2024-12-09 10:06:28.192435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:29760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.976 [2024-12-09 10:06:28.192447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:55:38.976 [2024-12-09 10:06:28.192465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:29768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.976 [2024-12-09 10:06:28.192494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:55:38.976 [2024-12-09 10:06:28.192512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:29776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.976 [2024-12-09 10:06:28.192524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:55:38.976 [2024-12-09 10:06:28.192542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:29784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.976 [2024-12-09 10:06:28.192555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:55:38.976 [2024-12-09 10:06:28.192573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:29792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.976 [2024-12-09 10:06:28.192584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:55:38.976 [2024-12-09 10:06:28.192603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.976 [2024-12-09 10:06:28.192615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:55:38.976 [2024-12-09 10:06:28.192640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.976 [2024-12-09 10:06:28.192653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:55:38.976 [2024-12-09 10:06:28.192671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:29424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:38.976 [2024-12-09 10:06:28.192683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:55:38.976 [2024-12-09 10:06:28.192701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.976 [2024-12-09 10:06:28.192713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:55:38.976 [2024-12-09 10:06:28.192731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:29816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.976 [2024-12-09 10:06:28.192743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:55:38.976 [2024-12-09 10:06:28.192761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:29824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.976 [2024-12-09 10:06:28.192773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:55:38.976 [2024-12-09 10:06:28.192792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:29832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.976 [2024-12-09 10:06:28.192804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:55:38.976 [2024-12-09 10:06:28.192823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:29840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.976 [2024-12-09 10:06:28.192834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:55:38.976 [2024-12-09 10:06:28.192853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:29848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.976 [2024-12-09 10:06:28.192866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:55:38.976 [2024-12-09 10:06:28.192884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:29856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.976 [2024-12-09 10:06:28.192895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:55:38.976 [2024-12-09 10:06:28.192914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:29864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.976 [2024-12-09 10:06:28.192925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:55:38.976 [2024-12-09 10:06:28.192943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:29872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.976 [2024-12-09 10:06:28.192955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:55:38.976 [2024-12-09 10:06:28.192972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:29880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.976 [2024-12-09 10:06:28.192984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:55:38.976 [2024-12-09 10:06:28.193007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:29888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.976 [2024-12-09 10:06:28.193019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:55:38.976 [2024-12-09 10:06:28.193038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.976 [2024-12-09 10:06:28.193051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:55:38.976 [2024-12-09 10:06:28.193069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:29904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.976 [2024-12-09 10:06:28.193081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:55:38.976 [2024-12-09 10:06:28.193099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:29912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.976 [2024-12-09 10:06:28.193111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:55:38.976 [2024-12-09 10:06:28.193130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:29920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.976 [2024-12-09 10:06:28.193143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:55:38.976 [2024-12-09 10:06:28.193161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:29928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.976 [2024-12-09 10:06:28.193174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:55:38.976 [2024-12-09 10:06:28.193191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.976 [2024-12-09 10:06:28.193203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:55:38.976 [2024-12-09 10:06:28.193222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:29944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.976 [2024-12-09 10:06:28.193234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:55:38.976 [2024-12-09 10:06:28.193252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.976 [2024-12-09 10:06:28.193264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:55:38.976 [2024-12-09 10:06:28.193282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:29960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.976 [2024-12-09 10:06:28.193294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:55:38.976 [2024-12-09 10:06:28.193312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:29968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.976 [2024-12-09 10:06:28.193325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:117 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:55:38.976 [2024-12-09 10:06:28.193343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:29976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.976 [2024-12-09 10:06:28.193354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:55:38.976 [2024-12-09 10:06:28.193909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.976 [2024-12-09 10:06:28.193939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:55:38.976 [2024-12-09 10:06:28.193961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:29992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.976 [2024-12-09 10:06:28.193973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:55:38.976 [2024-12-09 10:06:28.193991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:30000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.976 [2024-12-09 10:06:28.194004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:55:38.976 [2024-12-09 10:06:28.194023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:30008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.976 [2024-12-09 10:06:28.194035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:55:38.976 [2024-12-09 10:06:28.194053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:30016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.976 [2024-12-09 10:06:28.194065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:55:38.976 [2024-12-09 10:06:28.194083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.194096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.194114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:30032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.194126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.194144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:30040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.194156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.194176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:30048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.194189] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.194207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.194220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.194238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:30064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.194250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.194270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:30072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.194283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.194301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:30080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.194318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.194336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:30088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.194349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.194367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:30096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.194379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.194397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:30104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.194409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.194428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:30112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.194440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.194459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.194470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.194501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:30128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:55:38.977 [2024-12-09 10:06:28.194513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.194532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:30136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.194545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.194564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:30144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.194576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.194594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:30152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.194607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.194625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:30160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.194637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.194656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:30168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.194667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.194689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:30176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.194701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.194726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:30184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.194738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.194757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:30192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.194769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.194789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:30200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.194801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.194822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 
lba:30208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.194834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.194852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:30216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.194864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.194883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:30224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.194895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.194915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:30232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.194927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.194954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:30240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.194966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.194984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:30248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.194996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.195015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:30256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.195027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.195047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:30264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.195058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.195076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:30272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.195088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.195112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:30280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.195125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.195143] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:30288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.195155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.195173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.195185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.195203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:30304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.195215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.195234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:30312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.195246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.195265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:30320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.195276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.195303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:30328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.195315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:55:38.977 [2024-12-09 10:06:28.195333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:30336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.977 [2024-12-09 10:06:28.195344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.195363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.978 [2024-12-09 10:06:28.195375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.195394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:30352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.978 [2024-12-09 10:06:28.195405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.195424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:30360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.978 [2024-12-09 10:06:28.195435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001e p:0 m:0 dnr:0 
00:55:38.978 [2024-12-09 10:06:28.195454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.978 [2024-12-09 10:06:28.195466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.195494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:30376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.978 [2024-12-09 10:06:28.195513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.195531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:30384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.978 [2024-12-09 10:06:28.195543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.195562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:30392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.978 [2024-12-09 10:06:28.195574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.195592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:30400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.978 [2024-12-09 10:06:28.195604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.195622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:30408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.978 [2024-12-09 10:06:28.195634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.195652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:30416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.978 [2024-12-09 10:06:28.195663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.195681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.978 [2024-12-09 10:06:28.195693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.195711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:29432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.978 [2024-12-09 10:06:28.195724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.195742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:29440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.978 [2024-12-09 10:06:28.195756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:75 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.195773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:29448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.978 [2024-12-09 10:06:28.195785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.195803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:29456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.978 [2024-12-09 10:06:28.195816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.195833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:29464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.978 [2024-12-09 10:06:28.195844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.195863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.978 [2024-12-09 10:06:28.195882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.195901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:29480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.978 [2024-12-09 10:06:28.195912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.195930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:29488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.978 [2024-12-09 10:06:28.195942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.195960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:29496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.978 [2024-12-09 10:06:28.195972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.195990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:29504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.978 [2024-12-09 10:06:28.196002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.196020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:29512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.978 [2024-12-09 10:06:28.196031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.196050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:29520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.978 [2024-12-09 10:06:28.196063] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.196081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:29528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.978 [2024-12-09 10:06:28.196093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.196112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:29536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.978 [2024-12-09 10:06:28.196124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.196141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.978 [2024-12-09 10:06:28.196154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.196172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:29552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.978 [2024-12-09 10:06:28.196183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.196201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:29560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.978 [2024-12-09 10:06:28.196213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.196232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:29568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.978 [2024-12-09 10:06:28.196244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.196267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:29576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.978 [2024-12-09 10:06:28.196280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.196297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:29584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.978 [2024-12-09 10:06:28.196308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.196327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:29592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.978 [2024-12-09 10:06:28.196339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.197158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:29600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:55:38.978 [2024-12-09 10:06:28.197182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.197204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:29608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.978 [2024-12-09 10:06:28.197216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.197235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:30432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.978 [2024-12-09 10:06:28.197246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.197264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.978 [2024-12-09 10:06:28.197276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.197294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:29624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.978 [2024-12-09 10:06:28.197305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.197324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.978 [2024-12-09 10:06:28.197335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.197353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:29640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.978 [2024-12-09 10:06:28.197364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:55:38.978 [2024-12-09 10:06:28.197383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:29648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.979 [2024-12-09 10:06:28.197395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:55:38.979 [2024-12-09 10:06:28.197412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:29656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.979 [2024-12-09 10:06:28.197424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:55:38.979 [2024-12-09 10:06:28.197448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:29664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.979 [2024-12-09 10:06:28.197464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:55:38.979 [2024-12-09 10:06:28.197505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 
nsid:1 lba:29672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.979 [2024-12-09 10:06:28.197517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:55:38.979 [2024-12-09 10:06:28.197535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:29680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.979 [2024-12-09 10:06:28.197548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:55:38.979 [2024-12-09 10:06:28.197567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:29688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.979 [2024-12-09 10:06:28.197587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:55:38.979 [2024-12-09 10:06:28.197605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:29696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.979 [2024-12-09 10:06:28.197617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:55:38.979 [2024-12-09 10:06:28.197634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:29704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.979 [2024-12-09 10:06:28.197646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:55:38.979 [2024-12-09 10:06:28.197664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:29712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.979 [2024-12-09 10:06:28.197676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:55:38.979 [2024-12-09 10:06:28.197693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:29720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.979 [2024-12-09 10:06:28.197705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:55:38.979 [2024-12-09 10:06:28.197722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:29728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.979 [2024-12-09 10:06:28.197734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:55:38.979 [2024-12-09 10:06:28.197751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:29736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.979 [2024-12-09 10:06:28.197763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:55:38.979 [2024-12-09 10:06:28.197781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:29744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.979 [2024-12-09 10:06:28.197793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:55:38.979 [2024-12-09 10:06:28.197810] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:29752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.979 [2024-12-09 10:06:28.197822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:55:38.979 [2024-12-09 10:06:28.197840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.979 [2024-12-09 10:06:28.197858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:55:38.979 [2024-12-09 10:06:28.197876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:29768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.979 [2024-12-09 10:06:28.197888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:55:38.979 [2024-12-09 10:06:28.197906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.979 [2024-12-09 10:06:28.197919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:55:38.979 [2024-12-09 10:06:28.197937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.979 [2024-12-09 10:06:28.197949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:55:38.979 [2024-12-09 10:06:28.197968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:29792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.979 [2024-12-09 10:06:28.197980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:55:38.979 [2024-12-09 10:06:28.197998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:29800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.979 [2024-12-09 10:06:28.198010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:55:38.979 [2024-12-09 10:06:28.198028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:29416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.979 [2024-12-09 10:06:28.198040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:55:38.979 [2024-12-09 10:06:28.198058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:29424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.979 [2024-12-09 10:06:28.198070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:55:38.979 [2024-12-09 10:06:28.198088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:29808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.979 [2024-12-09 10:06:28.198100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 
00:55:38.979 [2024-12-09 10:06:28.198117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:29816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.979 [2024-12-09 10:06:28.198129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:55:38.979 [2024-12-09 10:06:28.198147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:29824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.979 [2024-12-09 10:06:28.198160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:55:38.979 [2024-12-09 10:06:28.198178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.979 [2024-12-09 10:06:28.198190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:55:38.979 [2024-12-09 10:06:28.198209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:29840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.979 [2024-12-09 10:06:28.198239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:55:38.979 [2024-12-09 10:06:28.198255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:29848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.979 [2024-12-09 10:06:28.198266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:55:38.979 [2024-12-09 10:06:28.198283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:29856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.979 [2024-12-09 10:06:28.198294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:55:38.979 [2024-12-09 10:06:28.198311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:29864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.979 [2024-12-09 10:06:28.198322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:55:38.979 [2024-12-09 10:06:28.198340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:29872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.979 [2024-12-09 10:06:28.198351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:55:38.979 [2024-12-09 10:06:28.198369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:29880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.979 [2024-12-09 10:06:28.198381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:55:38.979 [2024-12-09 10:06:28.198398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:29888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.979 [2024-12-09 10:06:28.198409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:25 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:55:38.979 [2024-12-09 10:06:28.198426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:29896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:55:38.979 [2024-12-09 10:06:28.198437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
[... this command/completion pairing repeats essentially unchanged from 10:06:28.198426 through 10:06:28.210758 as the rest of the in-flight queue drains against the inaccessible path: WRITE commands (lba:29616-30432, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (lba:29416-29608, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), all on sqid:1 nsid:1 with len:8; every completion is ASYMMETRIC ACCESS INACCESSIBLE (03/02) with p:0 m:0 dnr:0, sqhd advancing one entry per completion and wrapping from 0x007f to 0x0000 ...]
00:55:38.985 [2024-12-09 10:06:28.210742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:55:38.985 [2024-12-09 10:06:28.210758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1
lba:29520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.985 [2024-12-09 10:06:28.210769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:55:38.985 [2024-12-09 10:06:28.210786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.985 [2024-12-09 10:06:28.210797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:55:38.985 [2024-12-09 10:06:28.210814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:29536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.985 [2024-12-09 10:06:28.210825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:55:38.985 [2024-12-09 10:06:28.210841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:29544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.985 [2024-12-09 10:06:28.210851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:55:38.985 [2024-12-09 10:06:28.210868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:29552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.985 [2024-12-09 10:06:28.210879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:55:38.985 [2024-12-09 10:06:28.210896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:29560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.985 [2024-12-09 10:06:28.210908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:55:38.985 [2024-12-09 10:06:28.210924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:29568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.985 [2024-12-09 10:06:28.210936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:55:38.985 [2024-12-09 10:06:28.210953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:29576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.985 [2024-12-09 10:06:28.210964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:55:38.985 [2024-12-09 10:06:28.211737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:29584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.985 [2024-12-09 10:06:28.211759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:55:38.985 [2024-12-09 10:06:28.211782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:29592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.985 [2024-12-09 10:06:28.211794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:55:38.985 [2024-12-09 10:06:28.211812] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:29600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.985 [2024-12-09 10:06:28.211823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:55:38.985 [2024-12-09 10:06:28.211840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.985 [2024-12-09 10:06:28.211851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:55:38.985 [2024-12-09 10:06:28.211867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:30432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.985 [2024-12-09 10:06:28.211879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:55:38.985 [2024-12-09 10:06:28.211896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.985 [2024-12-09 10:06:28.211908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:55:38.985 [2024-12-09 10:06:28.211924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:29624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.985 [2024-12-09 10:06:28.211936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:55:38.985 [2024-12-09 10:06:28.211952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:29632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.985 [2024-12-09 10:06:28.211963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:55:38.985 [2024-12-09 10:06:28.211980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:29640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.985 [2024-12-09 10:06:28.211991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:55:38.985 [2024-12-09 10:06:28.212008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:29648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.985 [2024-12-09 10:06:28.212019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:55:38.985 [2024-12-09 10:06:28.212037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.985 [2024-12-09 10:06:28.212048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:55:38.985 [2024-12-09 10:06:28.212064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:29664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.985 [2024-12-09 10:06:28.212075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 
00:55:38.985 [2024-12-09 10:06:28.212091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:29672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.985 [2024-12-09 10:06:28.212113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:55:38.985 [2024-12-09 10:06:28.212131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:29680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.985 [2024-12-09 10:06:28.212143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:55:38.985 [2024-12-09 10:06:28.212160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:29688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.985 [2024-12-09 10:06:28.212171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:55:38.985 [2024-12-09 10:06:28.212188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:29696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.985 [2024-12-09 10:06:28.212200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:55:38.985 [2024-12-09 10:06:28.212217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:29704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.985 [2024-12-09 10:06:28.212228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:55:38.985 [2024-12-09 10:06:28.212244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:29712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.985 [2024-12-09 10:06:28.212255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:55:38.985 [2024-12-09 10:06:28.212272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:29720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.985 [2024-12-09 10:06:28.212284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:55:38.985 [2024-12-09 10:06:28.212300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:29728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.985 [2024-12-09 10:06:28.212311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:55:38.985 [2024-12-09 10:06:28.212330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:29736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.985 [2024-12-09 10:06:28.212342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.212359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.212370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:9 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.212388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:29752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.212399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.212416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.212427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.212443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.212460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.212489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:29776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.212501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.212519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:29784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.212530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.212548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:29792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.212559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.212575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:29800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.212587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.212604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:29416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.986 [2024-12-09 10:06:28.212616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.212633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:29424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.986 [2024-12-09 10:06:28.212644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.212661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:29808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.212672] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.212689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.212700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.212717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:29824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.212728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.212745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:29832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.212757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.212774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:29840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.212786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.212802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:29848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.212813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.212837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:29856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.212849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.212866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:29864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.212878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.212894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:29872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.212906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.212922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:29880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.212933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.212949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:29888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:55:38.986 [2024-12-09 10:06:28.212960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.212977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:29896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.212989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.213006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:29904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.213017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.213034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.213044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.213060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.213071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.213088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:29928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.213100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.213117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:29936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.213128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.213147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.213158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.213181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:29952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.213192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.213716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:29960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.213738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.213758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:29968 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.213770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.213787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:29976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.213799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.213816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:29984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.213828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.213845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:29992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.213857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.213874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.213886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.213902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:30008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.213916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.213932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.213943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.213961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:30024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.213973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.213989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:30032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.214001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:55:38.986 [2024-12-09 10:06:28.214018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:30040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.986 [2024-12-09 10:06:28.214029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.214045] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:30048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.214066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.214083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:30056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.214094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.214111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:30064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.214121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.214140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:30072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.214151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.214168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:30080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.214180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.214197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:30088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.214209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.214226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:30096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.214236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.214253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:30104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.214265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.214282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:30112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.214293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.214310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:30120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.214321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.214339] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.214350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.214367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:30136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.214381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.214398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:30144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.214415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.214432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:30152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.214443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.214461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:30160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.214485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.214504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:30168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.214516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.214533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:30176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.214544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.214562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.214573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.214589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:30192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.214600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.214619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:30200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.214630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000a p:0 m:0 
dnr:0 00:55:38.987 [2024-12-09 10:06:28.214647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.214659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.214678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:30216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.214689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.214707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:30224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.214719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.214736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:30232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.214747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.214764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:30240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.214776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.214799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:30248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.214811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.214829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.214840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.214859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:30264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.214870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.214887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:30272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.214899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.214916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:30280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.214926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.214943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:30288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.214954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.214970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:30296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.214982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.215000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:30304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.215010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.215027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:30312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.215038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.215055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:30320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.215066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.215083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:30328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.215094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.215112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:30336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.215124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.215146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.215158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.215175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:30352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.987 [2024-12-09 10:06:28.215189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:55:38.987 [2024-12-09 10:06:28.215207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:30360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.988 [2024-12-09 10:06:28.215219] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:55:38.988 [2024-12-09 10:06:28.215235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:30368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.988 [2024-12-09 10:06:28.215246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:55:38.988 [2024-12-09 10:06:28.215263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:30376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.988 [2024-12-09 10:06:28.215274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:55:38.988 [2024-12-09 10:06:28.215301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:30384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.988 [2024-12-09 10:06:28.215314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:55:38.988 [2024-12-09 10:06:28.215335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:30392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.988 [2024-12-09 10:06:28.215346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:55:38.988 [2024-12-09 10:06:28.215364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:30400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.988 [2024-12-09 10:06:28.215375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:55:38.988 [2024-12-09 10:06:28.215391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:30408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.988 [2024-12-09 10:06:28.215402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:55:38.988 [2024-12-09 10:06:28.215419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:30416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.988 [2024-12-09 10:06:28.215432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:55:38.988 [2024-12-09 10:06:28.215449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:30424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:38.988 [2024-12-09 10:06:28.215460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:55:38.988 [2024-12-09 10:06:28.215485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.988 [2024-12-09 10:06:28.215497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:55:38.988 [2024-12-09 10:06:28.215515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:29440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:55:38.988 [2024-12-09 10:06:28.215531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:55:38.988 [2024-12-09 10:06:28.215549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:29448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.988 [2024-12-09 10:06:28.215560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:55:38.988 [2024-12-09 10:06:28.215578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:29456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.988 [2024-12-09 10:06:28.215589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:55:38.988 [2024-12-09 10:06:28.215607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:29464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.988 [2024-12-09 10:06:28.215619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:55:38.988 [2024-12-09 10:06:28.216260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.988 [2024-12-09 10:06:28.216283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:55:38.988 [2024-12-09 10:06:28.216303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:29480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.988 [2024-12-09 10:06:28.216316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:55:38.988 [2024-12-09 10:06:28.216333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:29488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.988 [2024-12-09 10:06:28.216346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:55:38.988 [2024-12-09 10:06:28.216364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.988 [2024-12-09 10:06:28.216375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:55:38.988 [2024-12-09 10:06:28.216392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:29504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.988 [2024-12-09 10:06:28.216404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:55:38.988 [2024-12-09 10:06:28.216422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:29512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:38.988 [2024-12-09 10:06:28.216433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:55:38.988 [2024-12-09 10:06:28.216450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 
00:55:38.988 [2024-12-09 10:06:28.216463 .. 10:06:28.220884] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ~120 further READ (SGL TRANSPORT DATA BLOCK, lba 29416-29608) and WRITE (SGL DATA BLOCK OFFSET, lba 29616-30432) commands, sqid:1 nsid:1 len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 [~120 similar per-command NOTICE pairs condensed]
00:55:38.991 10795.21 IOPS, 42.17 MiB/s [2024-12-09T10:06:46.035Z]
00:55:38.991 10075.53 IOPS, 39.36 MiB/s [2024-12-09T10:06:46.035Z]
00:55:38.991 9445.81 IOPS, 36.90 MiB/s [2024-12-09T10:06:46.035Z]
00:55:38.991 8890.18 IOPS, 34.73 MiB/s [2024-12-09T10:06:46.035Z]
00:55:38.991 8482.28 IOPS, 33.13 MiB/s [2024-12-09T10:06:46.035Z]
00:55:38.991 8591.00 IOPS, 33.56 MiB/s [2024-12-09T10:06:46.035Z]
00:55:38.991 8687.35 IOPS, 33.93 MiB/s [2024-12-09T10:06:46.035Z]
00:55:38.991 8685.38 IOPS, 33.93 MiB/s [2024-12-09T10:06:46.035Z]
00:55:38.991 8684.23 IOPS, 33.92 MiB/s [2024-12-09T10:06:46.035Z]
00:55:38.991 8698.00 IOPS, 33.98 MiB/s [2024-12-09T10:06:46.035Z]
00:55:38.991 8776.33 IOPS, 34.28 MiB/s [2024-12-09T10:06:46.035Z]
00:55:38.991 8845.44 IOPS, 34.55 MiB/s [2024-12-09T10:06:46.035Z]
00:55:38.991 8901.15 IOPS, 34.77 MiB/s [2024-12-09T10:06:46.035Z]
00:55:38.991 8898.81 IOPS, 34.76 MiB/s [2024-12-09T10:06:46.035Z]
00:55:38.991 8892.64 IOPS, 34.74 MiB/s [2024-12-09T10:06:46.035Z]
00:55:38.991 [2024-12-09 10:06:43.055045 .. 10:06:43.055987] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ~23 further WRITE (lba 77592-77808) and READ (lba 76744-76944) commands, sqid:1 nsid:1 len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 [~23 similar per-command NOTICE pairs condensed]
00:55:38.992 8903.93 IOPS, 34.78 MiB/s [2024-12-09T10:06:46.036Z]
00:55:38.992 8935.83 IOPS, 34.91 MiB/s [2024-12-09T10:06:46.036Z]
00:55:38.992 8965.03 IOPS, 35.02 MiB/s [2024-12-09T10:06:46.036Z]
00:55:38.992 Received shutdown signal, test time was about 31.190165 seconds
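In the spdk_nvme_print_completion format above, the "(03/02)" pair is (status code type/status code). Per the NVMe specification's completion-status layout, status code type 0x3 is Path Related Status and, within it, status code 0x02 is Asymmetric Access Inaccessible: the namespace's ANA group is reported INACCESSIBLE on this controller, so every queued READ/WRITE on qid:1 fails until multipath fails over; dnr:0 marks each completion as retryable. A minimal decoding sketch in shell, assuming only the spec's status fields (it uses no SPDK helper):

decode_nvme_status() {
  # $1 = status code type (hex), $2 = status code (hex), as printed "(SCT/SC)"
  local sct=$((16#$1)) sc=$((16#$2))
  case "$sct" in
    0) echo -n "Generic Command Status" ;;
    1) echo -n "Command Specific Status" ;;
    2) echo -n "Media and Data Integrity Errors" ;;
    3) echo -n "Path Related Status" ;;
    *) echo -n "Vendor Specific / Reserved" ;;
  esac
  # For SCT 0x3, SC 0x02 is the ANA-inaccessible case seen throughout this run
  if (( sct == 3 && sc == 2 )); then
    echo " / Asymmetric Access Inaccessible (ANA)"
  else
    printf " / SC 0x%02x\n" "$sc"
  fi
}
decode_nvme_status 03 02   # -> Path Related Status / Asymmetric Access Inaccessible (ANA)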
00:55:38.992
00:55:38.992 Latency(us)
00:55:38.992 [2024-12-09T10:06:46.036Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:55:38.992 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:55:38.992 Verification LBA range: start 0x0 length 0x4000
00:55:38.992 Nvme0n1                     : 31.19          8963.66      35.01       0.00     0.00   14259.91     134.15 4102725.31
00:55:38.992 [2024-12-09T10:06:46.036Z] ===================================================================================================================
00:55:38.992 [2024-12-09T10:06:46.036Z] Total                       :                 8963.66      35.01       0.00     0.00   14259.91     134.15 4102725.31
00:55:38.992 10:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:55:39.256 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:55:39.256 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:55:39.256 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:55:39.256 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:55:39.256 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:55:39.256 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:55:39.256 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:55:39.256 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:55:39.256 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:55:39.256 rmmod nvme_tcp
00:55:39.256 rmmod nvme_fabrics
00:55:39.256 rmmod nvme_keyring
00:55:39.256 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:55:39.256 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:55:39.256 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:55:39.256 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 90631 ']'
00:55:39.256 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 90631
00:55:39.256 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 90631 ']'
00:55:39.256 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 90631
00:55:39.256 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:55:39.256 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:55:39.256 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90631
00:55:39.256 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:55:39.256 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:55:39.256 killing process with pid 90631
00:55:39.256 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90631'
00:55:39.256 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 90631
00:55:39.256 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 90631
00:55:39.515 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:55:39.515 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:55:39.515 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:55:39.515 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:55:39.515 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:55:39.515 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:55:39.515 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:55:39.515 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:55:39.515 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:55:39.515 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:55:39.515 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:55:39.515 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:55:39.515 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:55:39.515 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:55:39.515 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:55:39.515 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:55:39.515 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:55:39.515 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:55:39.775 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:55:39.775 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:55:39.775 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:55:39.775 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:55:39.775 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns
00:55:39.775 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:55:39.775 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:55:39.775 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
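A quick arithmetic check of the Latency(us) summary above: with the job's 4096-byte I/O size, MiB/s should equal IOPS x 4096 / 2^20, i.e. IOPS / 256. A one-liner using the Total row's figure:

awk 'BEGIN { iops = 8963.66; printf "%.2f MiB/s\n", iops * 4096 / 1048576 }'
# prints 35.01 MiB/s, matching the table's MiB/s column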
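The nvmf_veth_fini trace above follows a fixed order: detach every port from the bridge, bring the ports down, delete the bridge, delete the host-side veth endpoints, delete the endpoints living inside the target namespace, then remove the namespace. A condensed sketch of that sequence; the interface and namespace names are the harness's own, and the final ip netns delete is an assumption about what _remove_spdk_ns amounts to, since its body runs with xtrace disabled:

# detach all ports from the bridge, then bring them down (mirrors @233-@240)
for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$port" nomaster
done
for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$port" down
done
ip link delete nvmf_br type bridge        # bridge goes next, its ports are already detached
ip link delete nvmf_init_if               # host-side veth endpoints
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if   # endpoints inside the target netns
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk          # assumption: the removal hidden behind _remove_spdk_ns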
00:55:39.775 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0
00:55:39.775
00:55:39.775 real 0m37.337s
00:55:39.775 user 2m0.281s
00:55:39.775 sys 0m8.851s
00:55:39.775 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:55:39.775 10:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:55:39.775 ************************************
00:55:39.775 END TEST nvmf_host_multipath_status
00:55:39.775 ************************************
00:55:39.775 10:06:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:55:39.775 10:06:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:55:39.775 10:06:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:55:39.775 10:06:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:55:39.775 ************************************
00:55:39.775 START TEST nvmf_discovery_remove_ifc
00:55:39.775 ************************************
00:55:39.775 10:06:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:55:40.035 * Looking for test storage...
00:55:40.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:55:40.035 10:06:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:55:40.035 10:06:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version
00:55:40.035 10:06:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:55:40.035 10:06:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:55:40.035 10:06:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:55:40.035 10:06:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:55:40.035 10:06:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:55:40.035 10:06:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
00:55:40.035 10:06:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
00:55:40.035 10:06:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
00:55:40.035 10:06:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
00:55:40.035 10:06:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
00:55:40.035 10:06:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
00:55:40.035 10:06:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
00:55:40.035 10:06:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:55:40.035 10:06:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
00:55:40.035 10:06:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
00:55:40.035 10:06:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
00:55:40.035 10:06:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:55:40.035 10:06:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1
00:55:40.035 10:06:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1
00:55:40.035 10:06:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:55:40.035 10:06:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1
00:55:40.035 10:06:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1
00:55:40.035 10:06:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2
00:55:40.035 10:06:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:55:40.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:55:40.035 --rc genhtml_branch_coverage=1
00:55:40.035 --rc genhtml_function_coverage=1
00:55:40.035 --rc genhtml_legend=1
00:55:40.035 --rc geninfo_all_blocks=1
00:55:40.035 --rc geninfo_unexecuted_blocks=1
00:55:40.035
00:55:40.035 '
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' [same --rc option block as above] '
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov [same --rc option block as above] '
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov [same --rc option block as above] '
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three toolchain dirs repeated six more times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same duplicated toolchain segments]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same duplicated toolchain segments]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[same duplicated toolchain segments]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:55:40.035 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:55:40.035 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0
00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
00:55:40.036 10:06:47
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:55:40.036 Cannot find device "nvmf_init_br" 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:55:40.036 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:55:40.296 Cannot find device "nvmf_init_br2" 00:55:40.296 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:55:40.296 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:55:40.296 Cannot find device "nvmf_tgt_br" 00:55:40.296 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:55:40.296 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:55:40.296 Cannot find device "nvmf_tgt_br2" 00:55:40.296 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:55:40.296 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:55:40.296 Cannot find device "nvmf_init_br" 00:55:40.296 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:55:40.296 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:55:40.296 Cannot find device "nvmf_init_br2" 00:55:40.296 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:55:40.296 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:55:40.296 Cannot find device "nvmf_tgt_br" 00:55:40.296 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:55:40.296 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:55:40.296 Cannot find device "nvmf_tgt_br2" 00:55:40.296 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:55:40.296 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:55:40.296 Cannot find device "nvmf_br" 00:55:40.296 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:55:40.296 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:55:40.296 Cannot 
find device "nvmf_init_if" 00:55:40.296 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:55:40.296 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:55:40.296 Cannot find device "nvmf_init_if2" 00:55:40.296 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:55:40.296 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:55:40.296 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:55:40.296 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:55:40.296 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:55:40.296 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:55:40.296 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:55:40.296 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:55:40.296 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:55:40.296 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:55:40.296 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:55:40.296 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:55:40.296 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:55:40.296 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:55:40.296 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:55:40.556 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:55:40.556 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:55:40.556 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:55:40.556 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:55:40.556 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:55:40.556 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:55:40.556 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:55:40.556 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:55:40.556 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:55:40.556 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if up 00:55:40.556 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:55:40.556 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:55:40.556 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:55:40.556 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:55:40.556 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:55:40.556 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:55:40.556 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:55:40.556 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:55:40.556 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:55:40.556 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:55:40.556 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:55:40.556 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:55:40.556 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:55:40.556 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:55:40.556 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:55:40.556 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:55:40.556 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.121 ms 00:55:40.556 00:55:40.556 --- 10.0.0.3 ping statistics --- 00:55:40.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:40.556 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:55:40.556 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:55:40.556 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:55:40.556 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.124 ms 00:55:40.557 00:55:40.557 --- 10.0.0.4 ping statistics --- 00:55:40.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:40.557 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:55:40.557 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:55:40.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:55:40.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:55:40.557 00:55:40.557 --- 10.0.0.1 ping statistics --- 00:55:40.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:40.557 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:55:40.557 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:55:40.557 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:55:40.557 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:55:40.557 00:55:40.557 --- 10.0.0.2 ping statistics --- 00:55:40.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:40.557 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:55:40.557 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:55:40.557 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:55:40.557 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:55:40.557 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:55:40.557 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:55:40.557 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:55:40.557 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:55:40.557 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:55:40.557 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:55:40.557 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:55:40.557 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:55:40.557 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:55:40.557 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:55:40.557 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=92078 00:55:40.557 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:55:40.557 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 92078 00:55:40.557 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 92078 ']' 00:55:40.557 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:55:40.557 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:55:40.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:55:40.557 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
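The nvmf_veth_init sequence that just ran builds the whole test network from scratch: a dedicated namespace for the target, a veth pair per side, a bridge joining them, tagged iptables ACCEPT rules, and ping checks in both directions. Below is a condensed sketch of that topology, using only the names and addresses visible in the xtrace; the real helper in test/nvmf/common.sh also wires up the second initiator/target pair (nvmf_init_if2, nvmf_tgt_if2) and tolerates pre-existing devices.

# Condensed topology sketch (names and addresses from the xtrace above).
ip netns add nvmf_tgt_ns_spdk                              # target runs inside its own netns
ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up  # bridge ties both veth peers together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# The comment tags each rule so teardown can strip exactly the rules it added.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
ping -c 1 10.0.0.3                                         # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1          # target -> initiator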
00:55:40.557 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:55:40.557 10:06:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:55:40.816 [2024-12-09 10:06:47.605403] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:55:40.816 [2024-12-09 10:06:47.605513] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:55:40.816 [2024-12-09 10:06:47.761328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:40.816 [2024-12-09 10:06:47.838487] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:55:40.816 [2024-12-09 10:06:47.838548] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:55:40.816 [2024-12-09 10:06:47.838555] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:55:40.816 [2024-12-09 10:06:47.838560] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:55:40.816 [2024-12-09 10:06:47.838565] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:55:40.816 [2024-12-09 10:06:47.838909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:55:41.755 10:06:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:55:41.755 10:06:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:55:41.755 10:06:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:55:41.755 10:06:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:55:41.755 10:06:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:55:41.755 10:06:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:55:41.755 10:06:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:55:41.755 10:06:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:41.755 10:06:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:55:41.755 [2024-12-09 10:06:48.581534] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:55:41.755 [2024-12-09 10:06:48.589690] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:55:41.755 null0 00:55:41.755 [2024-12-09 10:06:48.621566] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:55:41.755 10:06:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:41.755 10:06:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=92127 00:55:41.755 10:06:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:55:41.755 10:06:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@60 -- # waitforlisten 92127 /tmp/host.sock 00:55:41.755 10:06:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 92127 ']' 00:55:41.755 10:06:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:55:41.755 10:06:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:55:41.755 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:55:41.755 10:06:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:55:41.755 10:06:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:55:41.755 10:06:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:55:41.755 [2024-12-09 10:06:48.702117] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:55:41.755 [2024-12-09 10:06:48.702205] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92127 ] 00:55:42.014 [2024-12-09 10:06:48.852587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:42.014 [2024-12-09 10:06:48.906437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:55:42.582 10:06:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:55:42.582 10:06:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:55:42.582 10:06:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:55:42.582 10:06:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:55:42.582 10:06:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:42.582 10:06:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:55:42.582 10:06:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:42.582 10:06:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:55:42.582 10:06:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:42.582 10:06:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:55:42.841 10:06:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:42.841 10:06:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:55:42.841 10:06:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:42.841 10:06:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:55:43.778 [2024-12-09 10:06:50.710461] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:55:43.778 [2024-12-09 10:06:50.710504] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:55:43.778 [2024-12-09 10:06:50.710518] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:55:43.778 [2024-12-09 10:06:50.797386] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:55:44.037 [2024-12-09 10:06:50.851698] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:55:44.037 [2024-12-09 10:06:50.852443] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2026110:1 started. 00:55:44.037 [2024-12-09 10:06:50.854046] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:55:44.037 [2024-12-09 10:06:50.854101] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:55:44.037 [2024-12-09 10:06:50.854125] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:55:44.037 [2024-12-09 10:06:50.854141] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:55:44.037 [2024-12-09 10:06:50.854163] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:55:44.037 10:06:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:44.037 10:06:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:55:44.037 [2024-12-09 10:06:50.858837] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2026110 was disconnected and freed. delete nvme_qpair. 
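nvme0n1 has now attached, and the rest of the test is driven by wait_for_bdev: the xtrace below is the same four-command pipeline (bdev_get_bdevs, jq, sort, xargs) plus sleep 1, repeated until the bdev list matches the expectation. Here is a minimal reconstruction of those helpers, assuming rpc_cmd resolves to scripts/rpc.py; the authoritative versions live in test/nvmf/host/discovery_remove_ifc.sh and may differ in detail.

get_bdev_list() {
    # Emit the current bdev names as one sorted, space-separated line.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
        | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # Poll once per second until the list equals $1 ("" waits for no bdevs).
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}

Step @72 above waits for nvme0n1 this way; step @79, after the interface is pulled, waits for the empty string.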
00:55:44.037 10:06:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:55:44.037 10:06:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:55:44.037 10:06:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:55:44.037 10:06:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:55:44.037 10:06:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:44.037 10:06:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:55:44.037 10:06:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:55:44.037 10:06:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:44.037 10:06:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:55:44.037 10:06:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:55:44.037 10:06:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:55:44.037 10:06:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:55:44.037 10:06:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:55:44.037 10:06:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:55:44.037 10:06:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:55:44.037 10:06:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:55:44.037 10:06:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:55:44.037 10:06:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:44.037 10:06:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:55:44.037 10:06:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:44.037 10:06:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:55:44.037 10:06:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:55:44.979 10:06:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:55:44.979 10:06:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:55:44.979 10:06:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:44.979 10:06:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:55:44.979 10:06:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:55:44.979 10:06:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:55:44.979 10:06:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:55:45.262 10:06:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:45.262 10:06:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:55:45.262 10:06:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:55:46.199 10:06:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:55:46.199 10:06:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:55:46.199 10:06:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:55:46.199 10:06:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:55:46.199 10:06:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:55:46.199 10:06:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:46.199 10:06:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:55:46.199 10:06:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:46.199 10:06:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:55:46.199 10:06:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:55:47.149 10:06:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:55:47.149 10:06:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:55:47.149 10:06:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:55:47.149 10:06:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:47.149 10:06:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:55:47.149 10:06:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:55:47.149 10:06:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:55:47.149 10:06:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:47.149 10:06:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:55:47.149 10:06:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:55:48.522 10:06:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:55:48.522 10:06:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:55:48.522 10:06:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:55:48.522 10:06:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:48.522 10:06:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:55:48.522 10:06:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:55:48.522 10:06:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:55:48.522 10:06:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:48.522 10:06:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:55:48.522 10:06:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:55:49.457 10:06:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:55:49.457 10:06:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:55:49.457 10:06:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:49.457 10:06:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:55:49.457 10:06:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:55:49.457 10:06:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:55:49.457 10:06:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:55:49.457 10:06:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:49.457 [2024-12-09 10:06:56.281635] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:55:49.457 [2024-12-09 10:06:56.281689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:55:49.457 [2024-12-09 10:06:56.281700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:49.457 [2024-12-09 10:06:56.281710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:55:49.457 [2024-12-09 10:06:56.281716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:49.457 [2024-12-09 10:06:56.281722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:55:49.457 [2024-12-09 10:06:56.281727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:49.457 [2024-12-09 10:06:56.281733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:55:49.457 [2024-12-09 10:06:56.281739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:49.458 [2024-12-09 10:06:56.281745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:55:49.458 [2024-12-09 10:06:56.281750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:49.458 [2024-12-09 10:06:56.281756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1f68290 is same with the state(6) to be set 00:55:49.458 [2024-12-09 10:06:56.291613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f68290 (9): Bad file descriptor 00:55:49.458 10:06:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:55:49.458 10:06:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:55:49.458 [2024-12-09 10:06:56.301610] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:55:49.458 [2024-12-09 10:06:56.301627] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:55:49.458 [2024-12-09 10:06:56.301631] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:55:49.458 [2024-12-09 10:06:56.301635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:55:49.458 [2024-12-09 10:06:56.301666] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:55:50.392 10:06:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:55:50.392 10:06:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:55:50.392 10:06:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:50.392 10:06:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:55:50.392 10:06:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:55:50.392 10:06:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:55:50.392 10:06:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:55:50.392 [2024-12-09 10:06:57.366661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:55:50.392 [2024-12-09 10:06:57.366971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f68290 with addr=10.0.0.3, port=4420 00:55:50.392 [2024-12-09 10:06:57.367110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f68290 is same with the state(6) to be set 00:55:50.392 [2024-12-09 10:06:57.367199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f68290 (9): Bad file descriptor 00:55:50.392 [2024-12-09 10:06:57.368537] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:55:50.392 [2024-12-09 10:06:57.368631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:55:50.392 [2024-12-09 10:06:57.368657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:55:50.392 [2024-12-09 10:06:57.368686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:55:50.392 [2024-12-09 10:06:57.368710] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
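Every line of this retry storm follows from the options the test handed to bdev_nvme_start_discovery back at step @69: reconnect once per second, fail pending I/O fast, and drop the controller entirely after two seconds without a connection. The equivalent stand-alone invocation, with the values copied from the xtrace (the rpc.py spelling is an assumption; the script goes through its rpc_cmd wrapper):

# --reconnect-delay-sec 1       retry the connection once per second
# --fast-io-fail-timeout-sec 1  fail queued I/O after 1 s of disconnection
# --ctrlr-loss-timeout-sec 2    give up and delete the controller after 2 s
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
    bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
    --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach

Once the two-second controller-loss timeout lapses with no successful reconnect, the host deletes nvme0n1, which is what the empty-list wait below is watching for.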
00:55:50.392 [2024-12-09 10:06:57.368725] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:55:50.392 [2024-12-09 10:06:57.368737] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:55:50.392 [2024-12-09 10:06:57.368760] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:55:50.392 [2024-12-09 10:06:57.368774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:55:50.392 10:06:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:50.392 10:06:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:55:50.392 10:06:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:55:51.326 [2024-12-09 10:06:58.366937] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:55:51.326 [2024-12-09 10:06:58.367063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:55:51.326 [2024-12-09 10:06:58.367092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:55:51.326 [2024-12-09 10:06:58.367099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:55:51.326 [2024-12-09 10:06:58.367106] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:55:51.326 [2024-12-09 10:06:58.367112] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:55:51.326 [2024-12-09 10:06:58.367117] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:55:51.326 [2024-12-09 10:06:58.367121] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:55:51.326 [2024-12-09 10:06:58.367172] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:55:51.326 [2024-12-09 10:06:58.367222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:55:51.326 [2024-12-09 10:06:58.367232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:51.326 [2024-12-09 10:06:58.367242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:55:51.326 [2024-12-09 10:06:58.367249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:51.326 [2024-12-09 10:06:58.367272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:55:51.326 [2024-12-09 10:06:58.367278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:51.326 [2024-12-09 10:06:58.367284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:55:51.326 [2024-12-09 10:06:58.367296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:51.326 [2024-12-09 10:06:58.367303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:55:51.326 [2024-12-09 10:06:58.367309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:51.326 [2024-12-09 10:06:58.367316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
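The injected fault has now run its full course: the discovery entry is removed and the host is left with no bdevs. For reference, the inject/repair pair the test uses, exactly as it appears in the xtrace (steps @75/@76 earlier, steps @82/@83 just below):

# Inject: make the target unreachable from inside its namespace.
ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
# ... host retries, times out, deletes nvme0n1 (the log above) ...

# Repair: restore the address and bring the interface back up.
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

Because the old controller was torn down, rediscovery attaches the same subsystem as a fresh controller, so the final wait is for nvme1n1 rather than nvme0n1.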
00:55:51.326 [2024-12-09 10:06:58.367692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f94800 (9): Bad file descriptor 00:55:51.326 [2024-12-09 10:06:58.368695] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:55:51.326 [2024-12-09 10:06:58.368713] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:55:51.584 10:06:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:55:51.584 10:06:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:55:51.584 10:06:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:55:51.584 10:06:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:51.584 10:06:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:55:51.584 10:06:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:55:51.584 10:06:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:55:51.584 10:06:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:51.584 10:06:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:55:51.584 10:06:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:55:51.584 10:06:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:55:51.584 10:06:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:55:51.584 10:06:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:55:51.584 10:06:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:55:51.584 10:06:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:55:51.584 10:06:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:51.584 10:06:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:55:51.584 10:06:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:55:51.584 10:06:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:55:51.584 10:06:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:51.584 10:06:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:55:51.584 10:06:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:55:52.519 10:06:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:55:52.519 10:06:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:55:52.519 10:06:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:52.519 10:06:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:55:52.519 10:06:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:55:52.519 10:06:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:55:52.519 10:06:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:55:52.519 10:06:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:52.776 10:06:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:55:52.776 10:06:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:55:53.343 [2024-12-09 10:07:00.373099] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:55:53.343 [2024-12-09 10:07:00.373183] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:55:53.343 [2024-12-09 10:07:00.373201] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:55:53.601 [2024-12-09 10:07:00.459037] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:55:53.601 [2024-12-09 10:07:00.513344] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:55:53.601 [2024-12-09 10:07:00.513924] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1ffd2d0:1 started. 00:55:53.601 [2024-12-09 10:07:00.515080] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:55:53.601 [2024-12-09 10:07:00.515112] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:55:53.601 [2024-12-09 10:07:00.515130] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:55:53.601 [2024-12-09 10:07:00.515143] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:55:53.601 [2024-12-09 10:07:00.515149] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:55:53.601 [2024-12-09 10:07:00.521316] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1ffd2d0 was disconnected and freed. delete nvme_qpair. 
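With nvme1n1 back, the test passes and teardown starts: kill the host app (pid 92127), then the target (pid 92078), unload the nvme modules, and unwind the network. The killprocess calls below check the process name first so the suite never signals a sudo wrapper by mistake; this is a rough reconstruction from the xtrace, not the verbatim helper from test/common/autotest_common.sh.

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return 1                     # still alive?
    if [[ $(uname) == Linux ]]; then
        # Refuse to signal a sudo wrapper; here the name resolves to a
        # reactor thread (reactor_0 / reactor_1 in the log), so we proceed.
        [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}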
00:55:53.601 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:55:53.601 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:55:53.601 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:55:53.601 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:55:53.601 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:55:53.601 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:53.601 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:55:53.601 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:53.865 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:55:53.865 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:55:53.865 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 92127 00:55:53.865 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 92127 ']' 00:55:53.865 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 92127 00:55:53.865 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:55:53.865 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:55:53.865 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92127 00:55:53.865 killing process with pid 92127 00:55:53.865 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:55:53.865 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:55:53.865 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92127' 00:55:53.865 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 92127 00:55:53.865 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 92127 00:55:53.865 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:55:53.865 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:55:53.865 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:55:54.133 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:55:54.133 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:55:54.134 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:55:54.134 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:55:54.134 rmmod nvme_tcp 00:55:54.134 rmmod nvme_fabrics 00:55:54.134 rmmod nvme_keyring 00:55:54.134 10:07:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:55:54.134 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:55:54.134 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:55:54.134 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 92078 ']' 00:55:54.134 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 92078 00:55:54.134 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 92078 ']' 00:55:54.134 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 92078 00:55:54.134 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:55:54.134 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:55:54.134 10:07:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92078 00:55:54.134 killing process with pid 92078 00:55:54.134 10:07:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:55:54.134 10:07:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:55:54.134 10:07:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92078' 00:55:54.134 10:07:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 92078 00:55:54.134 10:07:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 92078 00:55:54.392 10:07:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:55:54.392 10:07:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:55:54.392 10:07:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:55:54.392 10:07:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:55:54.392 10:07:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:55:54.392 10:07:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:55:54.392 10:07:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:55:54.392 10:07:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:55:54.392 10:07:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:55:54.392 10:07:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:55:54.392 10:07:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:55:54.392 10:07:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:55:54.392 10:07:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:55:54.392 10:07:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:55:54.392 10:07:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:55:54.392 10:07:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:55:54.392 10:07:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:55:54.392 10:07:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:55:54.650 10:07:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:55:54.650 10:07:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:55:54.650 10:07:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:55:54.650 10:07:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:55:54.651 10:07:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:55:54.651 10:07:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:54.651 10:07:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:55:54.651 10:07:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:54.651 10:07:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:55:54.651 00:55:54.651 real 0m14.839s 00:55:54.651 user 0m25.607s 00:55:54.651 sys 0m1.978s 00:55:54.651 10:07:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:55:54.651 10:07:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:55:54.651 ************************************ 00:55:54.651 END TEST nvmf_discovery_remove_ifc 00:55:54.651 ************************************ 00:55:54.651 10:07:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:55:54.651 10:07:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:55:54.651 10:07:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:55:54.651 10:07:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:55:54.651 ************************************ 00:55:54.651 START TEST nvmf_identify_kernel_target 00:55:54.651 ************************************ 00:55:54.651 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:55:54.909 * Looking for test storage... 
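The nvmftestfini sequence traced above runs in three steps: unload the nvme-tcp, nvme-fabrics and nvme-keyring modules, restore every iptables rule not tagged SPDK_NVMF (the iptr helper filters an iptables-save dump through grep -v SPDK_NVMF and back into iptables-restore), then tear the veth/bridge topology down roughly in reverse build order. A sketch of that unwind, using the interface and namespace names from this run; the '|| true' guards are an addition here so cleanup keeps going past already-missing links:

  for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$l" nomaster || true   # detach bridge ends from nvmf_br
  done
  for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$l" down || true
  done
  ip link delete nvmf_br type bridge || true
  ip link delete nvmf_init_if || true     # deleting one end drops its veth peer
  ip link delete nvmf_init_if2 || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
  # remove_spdk_ns ultimately drops the namespace itself:
  ip netns delete nvmf_tgt_ns_spdk || true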
00:55:54.909 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:55:54.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:54.909 --rc genhtml_branch_coverage=1 00:55:54.909 --rc genhtml_function_coverage=1 00:55:54.909 --rc genhtml_legend=1 00:55:54.909 --rc geninfo_all_blocks=1 00:55:54.909 --rc geninfo_unexecuted_blocks=1 00:55:54.909 00:55:54.909 ' 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:55:54.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:54.909 --rc genhtml_branch_coverage=1 00:55:54.909 --rc genhtml_function_coverage=1 00:55:54.909 --rc genhtml_legend=1 00:55:54.909 --rc geninfo_all_blocks=1 00:55:54.909 --rc geninfo_unexecuted_blocks=1 00:55:54.909 00:55:54.909 ' 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:55:54.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:54.909 --rc genhtml_branch_coverage=1 00:55:54.909 --rc genhtml_function_coverage=1 00:55:54.909 --rc genhtml_legend=1 00:55:54.909 --rc geninfo_all_blocks=1 00:55:54.909 --rc geninfo_unexecuted_blocks=1 00:55:54.909 00:55:54.909 ' 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:55:54.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:54.909 --rc genhtml_branch_coverage=1 00:55:54.909 --rc genhtml_function_coverage=1 00:55:54.909 --rc genhtml_legend=1 00:55:54.909 --rc geninfo_all_blocks=1 00:55:54.909 --rc geninfo_unexecuted_blocks=1 00:55:54.909 00:55:54.909 ' 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
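The cmp_versions walk traced above is the suite's lcov version gate: both version strings are split on '.' and '-', then compared field by field, with the shorter array padded with zeros. A compact re-statement of the lt helper as it behaves in this trace (numeric fields only):

  lt() {
      local -a v1 v2
      IFS=.- read -ra v1 <<< "$1"
      IFS=.- read -ra v2 <<< "$2"
      local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < max; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }

  lt 1.15 2 && echo '1.15 predates the 2.x lcov option names'

Here 1 < 2 decides it on the first field, so lt returns true and the suite keeps the pre-2.0 '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' spelling, exactly as the LCOV_OPTS export in the trace shows.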
00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:55:54.909 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:55:54.910 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:55:55.168 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:55:55.168 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:55:55.168 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:55:55.168 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:55.168 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:55.168 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:55.168 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:55:55.168 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:55.168 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:55:55.168 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:55:55.168 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:55:55.168 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:55:55.168 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:55:55.168 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:55:55.168 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:55:55.168 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:55:55.168 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:55:55.169 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:55:55.169 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:55:55.169 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:55:55.169 10:07:01 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:55:55.169 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:55:55.169 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:55:55.169 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:55:55.169 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:55:55.169 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:55.169 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:55:55.169 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:55.169 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:55:55.169 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:55:55.169 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:55:55.169 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:55:55.169 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:55:55.169 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:55:55.169 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:55:55.169 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:55:55.169 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:55:55.169 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:55:55.169 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:55:55.169 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:55:55.169 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:55:55.169 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:55:55.169 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:55:55.169 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:55:55.169 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:55:55.169 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:55:55.169 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:55:55.169 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:55:55.169 10:07:01 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:55:55.169 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:55:55.169 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:55:55.169 Cannot find device "nvmf_init_br" 00:55:55.169 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:55:55.169 10:07:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:55:55.169 Cannot find device "nvmf_init_br2" 00:55:55.169 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:55:55.169 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:55:55.169 Cannot find device "nvmf_tgt_br" 00:55:55.169 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:55:55.169 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:55:55.169 Cannot find device "nvmf_tgt_br2" 00:55:55.169 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:55:55.169 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:55:55.169 Cannot find device "nvmf_init_br" 00:55:55.169 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:55:55.169 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:55:55.169 Cannot find device "nvmf_init_br2" 00:55:55.169 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:55:55.169 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:55:55.169 Cannot find device "nvmf_tgt_br" 00:55:55.169 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:55:55.169 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:55:55.169 Cannot find device "nvmf_tgt_br2" 00:55:55.169 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:55:55.169 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:55:55.169 Cannot find device "nvmf_br" 00:55:55.169 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:55:55.169 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:55:55.169 Cannot find device "nvmf_init_if" 00:55:55.169 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:55:55.169 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:55:55.169 Cannot find device "nvmf_init_if2" 00:55:55.169 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:55:55.169 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:55:55.169 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:55:55.169 10:07:02 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:55:55.169 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:55:55.169 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:55:55.169 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:55:55.169 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:55:55.169 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:55:55.169 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:55:55.169 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:55:55.169 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:55:55.427 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:55:55.427 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:55:55.427 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:55:55.427 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:55:55.428 10:07:02 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:55:55.428 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:55:55.428 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.128 ms 00:55:55.428 00:55:55.428 --- 10.0.0.3 ping statistics --- 00:55:55.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:55.428 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:55:55.428 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:55:55.428 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.114 ms 00:55:55.428 00:55:55.428 --- 10.0.0.4 ping statistics --- 00:55:55.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:55.428 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:55:55.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:55:55.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:55:55.428 00:55:55.428 --- 10.0.0.1 ping statistics --- 00:55:55.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:55.428 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:55:55.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:55:55.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:55:55.428 00:55:55.428 --- 10.0.0.2 ping statistics --- 00:55:55.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:55.428 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:55:55.428 10:07:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:55:55.994 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:55:55.994 Waiting for block devices as requested 00:55:55.994 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:55:56.253 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:55:56.253 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:55:56.253 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:55:56.253 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:55:56.253 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:55:56.253 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:55:56.253 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:55:56.253 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:55:56.253 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:55:56.253 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:55:56.253 No valid GPT data, bailing 00:55:56.253 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:55:56.253 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:55:56.253 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:55:56.253 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:55:56.253 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:55:56.253 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:55:56.253 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:55:56.253 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:55:56.253 10:07:03 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:55:56.253 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:55:56.253 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:55:56.253 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:55:56.253 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:55:56.512 No valid GPT data, bailing 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:55:56.512 No valid GPT data, bailing 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:55:56.512 No valid GPT data, bailing 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:55:56.512 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -a 10.0.0.1 -t tcp -s 4420 00:55:56.771 00:55:56.771 Discovery Log Number of Records 2, Generation counter 2 00:55:56.771 =====Discovery Log Entry 0====== 00:55:56.771 trtype: tcp 00:55:56.771 adrfam: ipv4 00:55:56.771 subtype: current discovery subsystem 00:55:56.771 treq: not specified, sq flow control disable supported 00:55:56.771 portid: 1 00:55:56.771 trsvcid: 4420 00:55:56.771 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:55:56.771 traddr: 10.0.0.1 00:55:56.771 eflags: none 00:55:56.771 sectype: none 00:55:56.771 =====Discovery Log Entry 1====== 00:55:56.771 trtype: tcp 00:55:56.771 adrfam: ipv4 00:55:56.771 subtype: nvme subsystem 00:55:56.771 treq: not 
specified, sq flow control disable supported 00:55:56.771 portid: 1 00:55:56.771 trsvcid: 4420 00:55:56.771 subnqn: nqn.2016-06.io.spdk:testnqn 00:55:56.771 traddr: 10.0.0.1 00:55:56.771 eflags: none 00:55:56.771 sectype: none 00:55:56.771 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:55:56.771 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:55:56.771 ===================================================== 00:55:56.771 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:55:56.771 ===================================================== 00:55:56.771 Controller Capabilities/Features 00:55:56.771 ================================ 00:55:56.771 Vendor ID: 0000 00:55:56.771 Subsystem Vendor ID: 0000 00:55:56.771 Serial Number: 9e28ffce8a54c61857bf 00:55:56.771 Model Number: Linux 00:55:56.771 Firmware Version: 6.8.9-20 00:55:56.771 Recommended Arb Burst: 0 00:55:56.771 IEEE OUI Identifier: 00 00 00 00:55:56.771 Multi-path I/O 00:55:56.771 May have multiple subsystem ports: No 00:55:56.771 May have multiple controllers: No 00:55:56.771 Associated with SR-IOV VF: No 00:55:56.771 Max Data Transfer Size: Unlimited 00:55:56.771 Max Number of Namespaces: 0 00:55:56.771 Max Number of I/O Queues: 1024 00:55:56.771 NVMe Specification Version (VS): 1.3 00:55:56.771 NVMe Specification Version (Identify): 1.3 00:55:56.771 Maximum Queue Entries: 1024 00:55:56.771 Contiguous Queues Required: No 00:55:56.771 Arbitration Mechanisms Supported 00:55:56.771 Weighted Round Robin: Not Supported 00:55:56.771 Vendor Specific: Not Supported 00:55:56.771 Reset Timeout: 7500 ms 00:55:56.771 Doorbell Stride: 4 bytes 00:55:56.771 NVM Subsystem Reset: Not Supported 00:55:56.771 Command Sets Supported 00:55:56.771 NVM Command Set: Supported 00:55:56.771 Boot Partition: Not Supported 00:55:56.771 Memory Page Size Minimum: 4096 bytes 00:55:56.771 Memory Page Size Maximum: 4096 bytes 00:55:56.771 Persistent Memory Region: Not Supported 00:55:56.771 Optional Asynchronous Events Supported 00:55:56.771 Namespace Attribute Notices: Not Supported 00:55:56.771 Firmware Activation Notices: Not Supported 00:55:56.771 ANA Change Notices: Not Supported 00:55:56.771 PLE Aggregate Log Change Notices: Not Supported 00:55:56.771 LBA Status Info Alert Notices: Not Supported 00:55:56.771 EGE Aggregate Log Change Notices: Not Supported 00:55:56.771 Normal NVM Subsystem Shutdown event: Not Supported 00:55:56.771 Zone Descriptor Change Notices: Not Supported 00:55:56.771 Discovery Log Change Notices: Supported 00:55:56.771 Controller Attributes 00:55:56.771 128-bit Host Identifier: Not Supported 00:55:56.771 Non-Operational Permissive Mode: Not Supported 00:55:56.771 NVM Sets: Not Supported 00:55:56.771 Read Recovery Levels: Not Supported 00:55:56.771 Endurance Groups: Not Supported 00:55:56.771 Predictable Latency Mode: Not Supported 00:55:56.771 Traffic Based Keep ALive: Not Supported 00:55:56.771 Namespace Granularity: Not Supported 00:55:56.771 SQ Associations: Not Supported 00:55:56.771 UUID List: Not Supported 00:55:56.771 Multi-Domain Subsystem: Not Supported 00:55:56.771 Fixed Capacity Management: Not Supported 00:55:56.771 Variable Capacity Management: Not Supported 00:55:56.771 Delete Endurance Group: Not Supported 00:55:56.771 Delete NVM Set: Not Supported 00:55:56.771 Extended LBA Formats Supported: Not Supported 00:55:56.771 Flexible Data 
Placement Supported: Not Supported 00:55:56.771 00:55:56.771 Controller Memory Buffer Support 00:55:56.771 ================================ 00:55:56.771 Supported: No 00:55:56.771 00:55:56.771 Persistent Memory Region Support 00:55:56.771 ================================ 00:55:56.771 Supported: No 00:55:56.771 00:55:56.771 Admin Command Set Attributes 00:55:56.771 ============================ 00:55:56.771 Security Send/Receive: Not Supported 00:55:56.771 Format NVM: Not Supported 00:55:56.771 Firmware Activate/Download: Not Supported 00:55:56.771 Namespace Management: Not Supported 00:55:56.771 Device Self-Test: Not Supported 00:55:56.771 Directives: Not Supported 00:55:56.771 NVMe-MI: Not Supported 00:55:56.771 Virtualization Management: Not Supported 00:55:56.771 Doorbell Buffer Config: Not Supported 00:55:56.771 Get LBA Status Capability: Not Supported 00:55:56.771 Command & Feature Lockdown Capability: Not Supported 00:55:56.771 Abort Command Limit: 1 00:55:56.771 Async Event Request Limit: 1 00:55:56.771 Number of Firmware Slots: N/A 00:55:56.771 Firmware Slot 1 Read-Only: N/A 00:55:56.771 Firmware Activation Without Reset: N/A 00:55:56.771 Multiple Update Detection Support: N/A 00:55:56.771 Firmware Update Granularity: No Information Provided 00:55:56.771 Per-Namespace SMART Log: No 00:55:56.771 Asymmetric Namespace Access Log Page: Not Supported 00:55:56.771 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:55:56.771 Command Effects Log Page: Not Supported 00:55:56.771 Get Log Page Extended Data: Supported 00:55:56.771 Telemetry Log Pages: Not Supported 00:55:56.772 Persistent Event Log Pages: Not Supported 00:55:56.772 Supported Log Pages Log Page: May Support 00:55:56.772 Commands Supported & Effects Log Page: Not Supported 00:55:56.772 Feature Identifiers & Effects Log Page:May Support 00:55:56.772 NVMe-MI Commands & Effects Log Page: May Support 00:55:56.772 Data Area 4 for Telemetry Log: Not Supported 00:55:56.772 Error Log Page Entries Supported: 1 00:55:56.772 Keep Alive: Not Supported 00:55:56.772 00:55:56.772 NVM Command Set Attributes 00:55:56.772 ========================== 00:55:56.772 Submission Queue Entry Size 00:55:56.772 Max: 1 00:55:56.772 Min: 1 00:55:56.772 Completion Queue Entry Size 00:55:56.772 Max: 1 00:55:56.772 Min: 1 00:55:56.772 Number of Namespaces: 0 00:55:56.772 Compare Command: Not Supported 00:55:56.772 Write Uncorrectable Command: Not Supported 00:55:56.772 Dataset Management Command: Not Supported 00:55:56.772 Write Zeroes Command: Not Supported 00:55:56.772 Set Features Save Field: Not Supported 00:55:56.772 Reservations: Not Supported 00:55:56.772 Timestamp: Not Supported 00:55:56.772 Copy: Not Supported 00:55:56.772 Volatile Write Cache: Not Present 00:55:56.772 Atomic Write Unit (Normal): 1 00:55:56.772 Atomic Write Unit (PFail): 1 00:55:56.772 Atomic Compare & Write Unit: 1 00:55:56.772 Fused Compare & Write: Not Supported 00:55:56.772 Scatter-Gather List 00:55:56.772 SGL Command Set: Supported 00:55:56.772 SGL Keyed: Not Supported 00:55:56.772 SGL Bit Bucket Descriptor: Not Supported 00:55:56.772 SGL Metadata Pointer: Not Supported 00:55:56.772 Oversized SGL: Not Supported 00:55:56.772 SGL Metadata Address: Not Supported 00:55:56.772 SGL Offset: Supported 00:55:56.772 Transport SGL Data Block: Not Supported 00:55:56.772 Replay Protected Memory Block: Not Supported 00:55:56.772 00:55:56.772 Firmware Slot Information 00:55:56.772 ========================= 00:55:56.772 Active slot: 0 00:55:56.772 00:55:56.772 00:55:56.772 Error Log 
00:55:56.772 ========= 00:55:56.772 00:55:56.772 Active Namespaces 00:55:56.772 ================= 00:55:56.772 Discovery Log Page 00:55:56.772 ================== 00:55:56.772 Generation Counter: 2 00:55:56.772 Number of Records: 2 00:55:56.772 Record Format: 0 00:55:56.772 00:55:56.772 Discovery Log Entry 0 00:55:56.772 ---------------------- 00:55:56.772 Transport Type: 3 (TCP) 00:55:56.772 Address Family: 1 (IPv4) 00:55:56.772 Subsystem Type: 3 (Current Discovery Subsystem) 00:55:56.772 Entry Flags: 00:55:56.772 Duplicate Returned Information: 0 00:55:56.772 Explicit Persistent Connection Support for Discovery: 0 00:55:56.772 Transport Requirements: 00:55:56.772 Secure Channel: Not Specified 00:55:56.772 Port ID: 1 (0x0001) 00:55:56.772 Controller ID: 65535 (0xffff) 00:55:56.772 Admin Max SQ Size: 32 00:55:56.772 Transport Service Identifier: 4420 00:55:56.772 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:55:56.772 Transport Address: 10.0.0.1 00:55:56.772 Discovery Log Entry 1 00:55:56.772 ---------------------- 00:55:56.772 Transport Type: 3 (TCP) 00:55:56.772 Address Family: 1 (IPv4) 00:55:56.772 Subsystem Type: 2 (NVM Subsystem) 00:55:56.772 Entry Flags: 00:55:56.772 Duplicate Returned Information: 0 00:55:56.772 Explicit Persistent Connection Support for Discovery: 0 00:55:56.772 Transport Requirements: 00:55:56.772 Secure Channel: Not Specified 00:55:56.772 Port ID: 1 (0x0001) 00:55:56.772 Controller ID: 65535 (0xffff) 00:55:56.772 Admin Max SQ Size: 32 00:55:56.772 Transport Service Identifier: 4420 00:55:56.772 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:55:56.772 Transport Address: 10.0.0.1 00:55:56.772 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:55:57.031 get_feature(0x01) failed 00:55:57.031 get_feature(0x02) failed 00:55:57.031 get_feature(0x04) failed 00:55:57.031 ===================================================== 00:55:57.031 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:55:57.031 ===================================================== 00:55:57.031 Controller Capabilities/Features 00:55:57.031 ================================ 00:55:57.031 Vendor ID: 0000 00:55:57.031 Subsystem Vendor ID: 0000 00:55:57.031 Serial Number: 871dd4d152f0d00a3eac 00:55:57.031 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:55:57.031 Firmware Version: 6.8.9-20 00:55:57.031 Recommended Arb Burst: 6 00:55:57.031 IEEE OUI Identifier: 00 00 00 00:55:57.031 Multi-path I/O 00:55:57.031 May have multiple subsystem ports: Yes 00:55:57.031 May have multiple controllers: Yes 00:55:57.031 Associated with SR-IOV VF: No 00:55:57.031 Max Data Transfer Size: Unlimited 00:55:57.031 Max Number of Namespaces: 1024 00:55:57.031 Max Number of I/O Queues: 128 00:55:57.031 NVMe Specification Version (VS): 1.3 00:55:57.031 NVMe Specification Version (Identify): 1.3 00:55:57.031 Maximum Queue Entries: 1024 00:55:57.031 Contiguous Queues Required: No 00:55:57.031 Arbitration Mechanisms Supported 00:55:57.031 Weighted Round Robin: Not Supported 00:55:57.031 Vendor Specific: Not Supported 00:55:57.031 Reset Timeout: 7500 ms 00:55:57.031 Doorbell Stride: 4 bytes 00:55:57.031 NVM Subsystem Reset: Not Supported 00:55:57.031 Command Sets Supported 00:55:57.031 NVM Command Set: Supported 00:55:57.031 Boot Partition: Not Supported 00:55:57.031 Memory 
Page Size Minimum: 4096 bytes 00:55:57.031 Memory Page Size Maximum: 4096 bytes 00:55:57.031 Persistent Memory Region: Not Supported 00:55:57.031 Optional Asynchronous Events Supported 00:55:57.031 Namespace Attribute Notices: Supported 00:55:57.031 Firmware Activation Notices: Not Supported 00:55:57.031 ANA Change Notices: Supported 00:55:57.031 PLE Aggregate Log Change Notices: Not Supported 00:55:57.031 LBA Status Info Alert Notices: Not Supported 00:55:57.031 EGE Aggregate Log Change Notices: Not Supported 00:55:57.031 Normal NVM Subsystem Shutdown event: Not Supported 00:55:57.032 Zone Descriptor Change Notices: Not Supported 00:55:57.032 Discovery Log Change Notices: Not Supported 00:55:57.032 Controller Attributes 00:55:57.032 128-bit Host Identifier: Supported 00:55:57.032 Non-Operational Permissive Mode: Not Supported 00:55:57.032 NVM Sets: Not Supported 00:55:57.032 Read Recovery Levels: Not Supported 00:55:57.032 Endurance Groups: Not Supported 00:55:57.032 Predictable Latency Mode: Not Supported 00:55:57.032 Traffic Based Keep ALive: Supported 00:55:57.032 Namespace Granularity: Not Supported 00:55:57.032 SQ Associations: Not Supported 00:55:57.032 UUID List: Not Supported 00:55:57.032 Multi-Domain Subsystem: Not Supported 00:55:57.032 Fixed Capacity Management: Not Supported 00:55:57.032 Variable Capacity Management: Not Supported 00:55:57.032 Delete Endurance Group: Not Supported 00:55:57.032 Delete NVM Set: Not Supported 00:55:57.032 Extended LBA Formats Supported: Not Supported 00:55:57.032 Flexible Data Placement Supported: Not Supported 00:55:57.032 00:55:57.032 Controller Memory Buffer Support 00:55:57.032 ================================ 00:55:57.032 Supported: No 00:55:57.032 00:55:57.032 Persistent Memory Region Support 00:55:57.032 ================================ 00:55:57.032 Supported: No 00:55:57.032 00:55:57.032 Admin Command Set Attributes 00:55:57.032 ============================ 00:55:57.032 Security Send/Receive: Not Supported 00:55:57.032 Format NVM: Not Supported 00:55:57.032 Firmware Activate/Download: Not Supported 00:55:57.032 Namespace Management: Not Supported 00:55:57.032 Device Self-Test: Not Supported 00:55:57.032 Directives: Not Supported 00:55:57.032 NVMe-MI: Not Supported 00:55:57.032 Virtualization Management: Not Supported 00:55:57.032 Doorbell Buffer Config: Not Supported 00:55:57.032 Get LBA Status Capability: Not Supported 00:55:57.032 Command & Feature Lockdown Capability: Not Supported 00:55:57.032 Abort Command Limit: 4 00:55:57.032 Async Event Request Limit: 4 00:55:57.032 Number of Firmware Slots: N/A 00:55:57.032 Firmware Slot 1 Read-Only: N/A 00:55:57.032 Firmware Activation Without Reset: N/A 00:55:57.032 Multiple Update Detection Support: N/A 00:55:57.032 Firmware Update Granularity: No Information Provided 00:55:57.032 Per-Namespace SMART Log: Yes 00:55:57.032 Asymmetric Namespace Access Log Page: Supported 00:55:57.032 ANA Transition Time : 10 sec 00:55:57.032 00:55:57.032 Asymmetric Namespace Access Capabilities 00:55:57.032 ANA Optimized State : Supported 00:55:57.032 ANA Non-Optimized State : Supported 00:55:57.032 ANA Inaccessible State : Supported 00:55:57.032 ANA Persistent Loss State : Supported 00:55:57.032 ANA Change State : Supported 00:55:57.032 ANAGRPID is not changed : No 00:55:57.032 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:55:57.032 00:55:57.032 ANA Group Identifier Maximum : 128 00:55:57.032 Number of ANA Group Identifiers : 128 00:55:57.032 Max Number of Allowed Namespaces : 1024 00:55:57.032 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:55:57.032 Command Effects Log Page: Supported 00:55:57.032 Get Log Page Extended Data: Supported 00:55:57.032 Telemetry Log Pages: Not Supported 00:55:57.032 Persistent Event Log Pages: Not Supported 00:55:57.032 Supported Log Pages Log Page: May Support 00:55:57.032 Commands Supported & Effects Log Page: Not Supported 00:55:57.032 Feature Identifiers & Effects Log Page:May Support 00:55:57.032 NVMe-MI Commands & Effects Log Page: May Support 00:55:57.032 Data Area 4 for Telemetry Log: Not Supported 00:55:57.032 Error Log Page Entries Supported: 128 00:55:57.032 Keep Alive: Supported 00:55:57.032 Keep Alive Granularity: 1000 ms 00:55:57.032 00:55:57.032 NVM Command Set Attributes 00:55:57.032 ========================== 00:55:57.032 Submission Queue Entry Size 00:55:57.032 Max: 64 00:55:57.032 Min: 64 00:55:57.032 Completion Queue Entry Size 00:55:57.032 Max: 16 00:55:57.032 Min: 16 00:55:57.032 Number of Namespaces: 1024 00:55:57.032 Compare Command: Not Supported 00:55:57.032 Write Uncorrectable Command: Not Supported 00:55:57.032 Dataset Management Command: Supported 00:55:57.032 Write Zeroes Command: Supported 00:55:57.032 Set Features Save Field: Not Supported 00:55:57.032 Reservations: Not Supported 00:55:57.032 Timestamp: Not Supported 00:55:57.032 Copy: Not Supported 00:55:57.032 Volatile Write Cache: Present 00:55:57.032 Atomic Write Unit (Normal): 1 00:55:57.032 Atomic Write Unit (PFail): 1 00:55:57.032 Atomic Compare & Write Unit: 1 00:55:57.032 Fused Compare & Write: Not Supported 00:55:57.032 Scatter-Gather List 00:55:57.032 SGL Command Set: Supported 00:55:57.032 SGL Keyed: Not Supported 00:55:57.032 SGL Bit Bucket Descriptor: Not Supported 00:55:57.032 SGL Metadata Pointer: Not Supported 00:55:57.032 Oversized SGL: Not Supported 00:55:57.032 SGL Metadata Address: Not Supported 00:55:57.032 SGL Offset: Supported 00:55:57.032 Transport SGL Data Block: Not Supported 00:55:57.032 Replay Protected Memory Block: Not Supported 00:55:57.032 00:55:57.032 Firmware Slot Information 00:55:57.032 ========================= 00:55:57.032 Active slot: 0 00:55:57.032 00:55:57.032 Asymmetric Namespace Access 00:55:57.032 =========================== 00:55:57.032 Change Count : 0 00:55:57.032 Number of ANA Group Descriptors : 1 00:55:57.032 ANA Group Descriptor : 0 00:55:57.032 ANA Group ID : 1 00:55:57.032 Number of NSID Values : 1 00:55:57.032 Change Count : 0 00:55:57.032 ANA State : 1 00:55:57.032 Namespace Identifier : 1 00:55:57.032 00:55:57.032 Commands Supported and Effects 00:55:57.032 ============================== 00:55:57.032 Admin Commands 00:55:57.032 -------------- 00:55:57.032 Get Log Page (02h): Supported 00:55:57.032 Identify (06h): Supported 00:55:57.032 Abort (08h): Supported 00:55:57.032 Set Features (09h): Supported 00:55:57.032 Get Features (0Ah): Supported 00:55:57.032 Asynchronous Event Request (0Ch): Supported 00:55:57.032 Keep Alive (18h): Supported 00:55:57.032 I/O Commands 00:55:57.032 ------------ 00:55:57.032 Flush (00h): Supported 00:55:57.032 Write (01h): Supported LBA-Change 00:55:57.032 Read (02h): Supported 00:55:57.032 Write Zeroes (08h): Supported LBA-Change 00:55:57.032 Dataset Management (09h): Supported 00:55:57.032 00:55:57.032 Error Log 00:55:57.032 ========= 00:55:57.032 Entry: 0 00:55:57.032 Error Count: 0x3 00:55:57.032 Submission Queue Id: 0x0 00:55:57.032 Command Id: 0x5 00:55:57.032 Phase Bit: 0 00:55:57.032 Status Code: 0x2 00:55:57.032 Status Code Type: 0x0 00:55:57.032 Do Not Retry: 1 00:55:57.032 Error 
Location: 0x28 00:55:57.032 LBA: 0x0 00:55:57.032 Namespace: 0x0 00:55:57.032 Vendor Log Page: 0x0 00:55:57.032 ----------- 00:55:57.032 Entry: 1 00:55:57.032 Error Count: 0x2 00:55:57.032 Submission Queue Id: 0x0 00:55:57.032 Command Id: 0x5 00:55:57.032 Phase Bit: 0 00:55:57.032 Status Code: 0x2 00:55:57.032 Status Code Type: 0x0 00:55:57.032 Do Not Retry: 1 00:55:57.032 Error Location: 0x28 00:55:57.032 LBA: 0x0 00:55:57.032 Namespace: 0x0 00:55:57.032 Vendor Log Page: 0x0 00:55:57.032 ----------- 00:55:57.032 Entry: 2 00:55:57.032 Error Count: 0x1 00:55:57.032 Submission Queue Id: 0x0 00:55:57.032 Command Id: 0x4 00:55:57.032 Phase Bit: 0 00:55:57.032 Status Code: 0x2 00:55:57.032 Status Code Type: 0x0 00:55:57.032 Do Not Retry: 1 00:55:57.032 Error Location: 0x28 00:55:57.032 LBA: 0x0 00:55:57.032 Namespace: 0x0 00:55:57.032 Vendor Log Page: 0x0 00:55:57.032 00:55:57.032 Number of Queues 00:55:57.032 ================ 00:55:57.032 Number of I/O Submission Queues: 128 00:55:57.032 Number of I/O Completion Queues: 128 00:55:57.032 00:55:57.032 ZNS Specific Controller Data 00:55:57.032 ============================ 00:55:57.032 Zone Append Size Limit: 0 00:55:57.032 00:55:57.032 00:55:57.032 Active Namespaces 00:55:57.032 ================= 00:55:57.032 get_feature(0x05) failed 00:55:57.032 Namespace ID:1 00:55:57.032 Command Set Identifier: NVM (00h) 00:55:57.032 Deallocate: Supported 00:55:57.032 Deallocated/Unwritten Error: Not Supported 00:55:57.032 Deallocated Read Value: Unknown 00:55:57.032 Deallocate in Write Zeroes: Not Supported 00:55:57.032 Deallocated Guard Field: 0xFFFF 00:55:57.032 Flush: Supported 00:55:57.032 Reservation: Not Supported 00:55:57.032 Namespace Sharing Capabilities: Multiple Controllers 00:55:57.032 Size (in LBAs): 1310720 (5GiB) 00:55:57.032 Capacity (in LBAs): 1310720 (5GiB) 00:55:57.032 Utilization (in LBAs): 1310720 (5GiB) 00:55:57.032 UUID: 4746da7d-ee6b-4b0d-8ef2-8dd24d3a6ad2 00:55:57.032 Thin Provisioning: Not Supported 00:55:57.033 Per-NS Atomic Units: Yes 00:55:57.033 Atomic Boundary Size (Normal): 0 00:55:57.033 Atomic Boundary Size (PFail): 0 00:55:57.033 Atomic Boundary Offset: 0 00:55:57.033 NGUID/EUI64 Never Reused: No 00:55:57.033 ANA group ID: 1 00:55:57.033 Namespace Write Protected: No 00:55:57.033 Number of LBA Formats: 1 00:55:57.033 Current LBA Format: LBA Format #00 00:55:57.033 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:55:57.033 00:55:57.033 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:55:57.033 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:55:57.033 10:07:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:55:57.033 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:55:57.033 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:55:57.033 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:55:57.033 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:55:57.033 rmmod nvme_tcp 00:55:57.033 rmmod nvme_fabrics 00:55:57.033 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:55:57.033 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:55:57.033 10:07:04 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:55:57.033 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:55:57.033 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:55:57.033 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:55:57.033 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:55:57.033 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:55:57.033 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:55:57.033 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:55:57.033 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:55:57.033 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:55:57.033 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:55:57.033 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:55:57.033 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:55:57.291 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:55:57.292 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:55:57.292 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:55:57.292 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:55:57.292 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:55:57.292 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:55:57.292 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:55:57.292 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:55:57.292 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:55:57.292 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:55:57.292 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:55:57.292 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:55:57.292 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:57.292 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:55:57.292 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:57.550 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:55:57.550 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:55:57.550 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:55:57.550 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:55:57.550 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:55:57.550 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:55:57.550 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:55:57.550 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:55:57.550 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:55:57.550 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:55:57.550 10:07:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:55:58.486 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:55:58.486 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:55:58.486 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:55:58.486 00:55:58.486 real 0m3.762s 00:55:58.486 user 0m1.285s 00:55:58.486 sys 0m1.907s 00:55:58.486 10:07:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:55:58.486 10:07:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:55:58.486 ************************************ 00:55:58.486 END TEST nvmf_identify_kernel_target 00:55:58.486 ************************************ 00:55:58.486 10:07:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:55:58.486 10:07:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:55:58.486 10:07:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:55:58.486 10:07:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:55:58.486 ************************************ 00:55:58.486 START TEST nvmf_auth_host 00:55:58.486 ************************************ 00:55:58.486 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:55:58.744 * Looking for test storage... 
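The configfs teardown traced above (nvmf/common.sh@712-@726) dismantles the kernel nvmet target that the identify test had pointed spdk_nvme_identify at. A minimal sketch of that order, assuming the elided `echo 0` redirection targets the namespace enable attribute:

  nqn=nqn.2016-06.io.spdk:testnqn
  cfs=/sys/kernel/config/nvmet
  echo 0 > "$cfs/subsystems/$nqn/namespaces/1/enable"  # assumed: disable the namespace first
  rm -f "$cfs/ports/1/subsystems/$nqn"                 # unlink the port -> subsystem symlink
  rmdir "$cfs/subsystems/$nqn/namespaces/1"            # children before parents:
  rmdir "$cfs/ports/1"                                 # configfs rmdir fails otherwise
  rmdir "$cfs/subsystems/$nqn"
  modprobe -r nvmet_tcp nvmet                          # unload the target modules last

configfs will not remove a directory that still has children or an active port symlink, which is why the link and the namespace go before the port and subsystem directories.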
00:55:58.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:55:58.744 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:55:58.744 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:55:58.744 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:55:58.744 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:55:58.744 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:55:58.744 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:55:58.744 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:55:58.744 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:55:58.744 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:55:58.744 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:55:58.744 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:55:58.744 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:55:58.744 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:55:58.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:58.745 --rc genhtml_branch_coverage=1 00:55:58.745 --rc genhtml_function_coverage=1 00:55:58.745 --rc genhtml_legend=1 00:55:58.745 --rc geninfo_all_blocks=1 00:55:58.745 --rc geninfo_unexecuted_blocks=1 00:55:58.745 00:55:58.745 ' 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:55:58.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:58.745 --rc genhtml_branch_coverage=1 00:55:58.745 --rc genhtml_function_coverage=1 00:55:58.745 --rc genhtml_legend=1 00:55:58.745 --rc geninfo_all_blocks=1 00:55:58.745 --rc geninfo_unexecuted_blocks=1 00:55:58.745 00:55:58.745 ' 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:55:58.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:58.745 --rc genhtml_branch_coverage=1 00:55:58.745 --rc genhtml_function_coverage=1 00:55:58.745 --rc genhtml_legend=1 00:55:58.745 --rc geninfo_all_blocks=1 00:55:58.745 --rc geninfo_unexecuted_blocks=1 00:55:58.745 00:55:58.745 ' 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:55:58.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:58.745 --rc genhtml_branch_coverage=1 00:55:58.745 --rc genhtml_function_coverage=1 00:55:58.745 --rc genhtml_legend=1 00:55:58.745 --rc geninfo_all_blocks=1 00:55:58.745 --rc geninfo_unexecuted_blocks=1 00:55:58.745 00:55:58.745 ' 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:55:58.745 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:55:58.745 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:55:59.004 Cannot find device "nvmf_init_br" 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:55:59.004 Cannot find device "nvmf_init_br2" 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:55:59.004 Cannot find device "nvmf_tgt_br" 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:55:59.004 Cannot find device "nvmf_tgt_br2" 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:55:59.004 Cannot find device "nvmf_init_br" 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:55:59.004 Cannot find device "nvmf_init_br2" 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:55:59.004 Cannot find device "nvmf_tgt_br" 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:55:59.004 Cannot find device "nvmf_tgt_br2" 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:55:59.004 Cannot find device "nvmf_br" 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:55:59.004 Cannot find device "nvmf_init_if" 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:55:59.004 Cannot find device "nvmf_init_if2" 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:55:59.004 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:55:59.004 10:07:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:55:59.004 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:55:59.004 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:55:59.004 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:55:59.004 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:55:59.004 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
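The run of `ip` commands above (nvmf/common.sh@177-@214) is nvmf_veth_init assembling the test network: veth pairs whose *_if ends carry the addresses, whose *_br peer ends are enslaved to a common bridge, and whose target side lives in the nvmf_tgt_ns_spdk namespace. Condensed to one pair per side, the topology looks roughly like this sketch:

  ip netns add nvmf_tgt_ns_spdk                    # target side gets its own netns
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk   # move the target end over
  ip addr add 10.0.0.1/24 dev nvmf_init_if         # initiator address, root netns
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                  # bridge joins the *_br peer ends
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # every link is then brought up, as in the trace above

The second interface pair (10.0.0.2 and 10.0.0.4) repeats the same pattern, which is why the ping checks that follow exercise all four addresses.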
00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:55:59.263 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:55:59.263 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:55:59.263 00:55:59.263 --- 10.0.0.3 ping statistics --- 00:55:59.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:59.263 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:55:59.263 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:55:59.263 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.076 ms 00:55:59.263 00:55:59.263 --- 10.0.0.4 ping statistics --- 00:55:59.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:59.263 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:55:59.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:55:59.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:55:59.263 00:55:59.263 --- 10.0.0.1 ping statistics --- 00:55:59.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:59.263 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:55:59.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:55:59.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:55:59.263 00:55:59.263 --- 10.0.0.2 ping statistics --- 00:55:59.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:59.263 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=93140 00:55:59.263 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:55:59.264 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 93140 00:55:59.264 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 93140 ']' 00:55:59.264 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:55:59.264 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:55:59.264 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
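Two details of the setup above are worth unpacking. The `ipts` wrapper (@790) appends an SPDK_NVMF comment to every iptables rule it inserts, so the `iptr` teardown seen at the end of the previous test can strip the whole set without tracking rule numbers. A sketch of that idea:

  ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }  # tag each rule
  ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP
  # teardown: filter every tagged rule out of a full dump in one pass
  iptables-save | grep -v SPDK_NVMF | iptables-restore

Once the rules are in place and the four pings pass, nvmfappstart launches nvmf_tgt inside the namespace (-i sets the shared-memory id, -e the tracepoint group mask, -L enables the nvme_auth debug log component) and waitforlisten polls the RPC socket at /var/tmp/spdk.sock until pid 93140 answers.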
00:55:59.264 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:55:59.264 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:00.199 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:56:00.199 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:56:00.199 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:56:00.199 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:56:00.199 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=888688c2453b5dc694138e2a67ed3e52 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.DDC 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 888688c2453b5dc694138e2a67ed3e52 0 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 888688c2453b5dc694138e2a67ed3e52 0 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=888688c2453b5dc694138e2a67ed3e52 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.DDC 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.DDC 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.DDC 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:56:00.458 10:07:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ed47263777ca248c378202b00d314884ef35050890a008b7054036956f8526a4 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.PaL 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ed47263777ca248c378202b00d314884ef35050890a008b7054036956f8526a4 3 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ed47263777ca248c378202b00d314884ef35050890a008b7054036956f8526a4 3 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ed47263777ca248c378202b00d314884ef35050890a008b7054036956f8526a4 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.PaL 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.PaL 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.PaL 00:56:00.458 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7236373c0991a82454b3c0b3677377d69b90e9378a57a16f 00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.SBl 00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7236373c0991a82454b3c0b3677377d69b90e9378a57a16f 0 00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7236373c0991a82454b3c0b3677377d69b90e9378a57a16f 0 
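gen_dhchap_key, traced above, draws len/2 random bytes via xxd (48 hex characters = 24 bytes here), keeps the hex string, and has format_key wrap it into the NVMe DHCHAP secret representation. A hedged sketch of that final step, assuming (per the TP 8006 secret format) that the hex string itself is the secret and that a little-endian CRC-32 is appended before base64 encoding:

  key=7236373c0991a82454b3c0b3677377d69b90e9378a57a16f  # the hex string from the trace
  python3 - <<PY
import base64, zlib
secret = b"$key"                                # assumed: the ASCII hex string is the secret
crc = zlib.crc32(secret).to_bytes(4, "little")  # integrity check, appended little-endian
print("DHHC-1:00:" + base64.b64encode(secret + crc).decode() + ":")
PY

The second field of the result carries the hash to apply to the secret (00 = use as-is, 01/02/03 = SHA-256/384/512), which is why each format_dhchap_key call passes a digest index alongside the key; the finished string is chmod 0600'd into a /tmp/spdk.key-* file that auth.sh later hands to the target.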
00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7236373c0991a82454b3c0b3677377d69b90e9378a57a16f 00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.SBl 00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.SBl 00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.SBl 00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=89a1f484e1a21b950a19ddcd65cf50629440332962f92919 00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.CfU 00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 89a1f484e1a21b950a19ddcd65cf50629440332962f92919 2 00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 89a1f484e1a21b950a19ddcd65cf50629440332962f92919 2 00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=89a1f484e1a21b950a19ddcd65cf50629440332962f92919 00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:56:00.459 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:56:00.717 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.CfU 00:56:00.717 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.CfU 00:56:00.717 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.CfU 00:56:00.717 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:56:00.717 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:56:00.718 10:07:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0e6c5e3a7f878d545ec3d2b80f03fe1b 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.DEf 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0e6c5e3a7f878d545ec3d2b80f03fe1b 1 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0e6c5e3a7f878d545ec3d2b80f03fe1b 1 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0e6c5e3a7f878d545ec3d2b80f03fe1b 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.DEf 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.DEf 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.DEf 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b55ad66440bffecd20b624eecdec2585 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.RP0 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b55ad66440bffecd20b624eecdec2585 1 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b55ad66440bffecd20b624eecdec2585 1 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=b55ad66440bffecd20b624eecdec2585 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.RP0 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.RP0 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.RP0 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0ffb3126011a2debe2f55601dfa073b2f38e0f1e17c8b450 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.IXq 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0ffb3126011a2debe2f55601dfa073b2f38e0f1e17c8b450 2 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0ffb3126011a2debe2f55601dfa073b2f38e0f1e17c8b450 2 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0ffb3126011a2debe2f55601dfa073b2f38e0f1e17c8b450 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.IXq 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.IXq 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.IXq 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:56:00.718 10:07:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=46d4da611cf68aedf61b47d0afb2dcb7 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.LFb 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 46d4da611cf68aedf61b47d0afb2dcb7 0 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 46d4da611cf68aedf61b47d0afb2dcb7 0 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=46d4da611cf68aedf61b47d0afb2dcb7 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:56:00.718 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:56:01.038 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.LFb 00:56:01.038 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.LFb 00:56:01.038 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.LFb 00:56:01.038 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:56:01.038 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:56:01.038 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:56:01.038 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:56:01.038 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:56:01.038 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:56:01.038 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:56:01.038 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=03bb179a0e5bfcd107b7921e935a1224c025eead636c923541c1d469dd215f8b 00:56:01.038 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:56:01.038 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.HCU 00:56:01.038 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 03bb179a0e5bfcd107b7921e935a1224c025eead636c923541c1d469dd215f8b 3 00:56:01.038 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 03bb179a0e5bfcd107b7921e935a1224c025eead636c923541c1d469dd215f8b 3 00:56:01.038 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:56:01.038 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:56:01.038 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=03bb179a0e5bfcd107b7921e935a1224c025eead636c923541c1d469dd215f8b 00:56:01.038 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:56:01.038 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:56:01.038 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.HCU 00:56:01.038 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.HCU 00:56:01.038 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.HCU 00:56:01.038 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:56:01.038 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 93140 00:56:01.038 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 93140 ']' 00:56:01.038 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:56:01.038 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:56:01.038 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:56:01.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:56:01.038 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:56:01.038 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:01.312 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:56:01.312 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:56:01.312 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:56:01.312 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.DDC 00:56:01.312 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:01.312 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:01.312 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:01.312 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.PaL ]] 00:56:01.312 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PaL 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.SBl 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.CfU ]] 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.CfU 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.DEf 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.RP0 ]] 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.RP0 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.IXq 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.LFb ]] 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.LFb 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.HCU 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:01.313 10:07:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:56:01.313 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:56:01.880 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:56:01.880 Waiting for block devices as requested 00:56:01.880 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:56:02.139 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:56:02.707 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:56:02.707 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:56:02.707 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:56:02.707 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:56:02.707 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:56:02.707 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:56:02.707 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:56:02.707 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:56:02.707 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:56:02.707 No valid GPT data, bailing 00:56:02.707 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:56:02.707 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:56:02.707 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:56:02.707 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:56:02.707 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:56:02.707 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:56:02.707 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:56:02.707 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:56:02.707 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:56:02.707 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:56:02.707 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:56:02.707 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:56:02.707 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:56:02.707 No valid GPT data, bailing 00:56:02.707 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:56:02.968 No valid GPT data, bailing 00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:56:02.968 No valid GPT data, bailing 00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1
00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1
00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp
00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -a 10.0.0.1 -t tcp -s 4420
00:56:02.968
00:56:02.968 Discovery Log Number of Records 2, Generation counter 2
00:56:02.968 =====Discovery Log Entry 0======
00:56:02.968 trtype: tcp
00:56:02.968 adrfam: ipv4
00:56:02.968 subtype: current discovery subsystem
00:56:02.968 treq: not specified, sq flow control disable supported
00:56:02.968 portid: 1
00:56:02.968 trsvcid: 4420
00:56:02.968 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:56:02.968 traddr: 10.0.0.1
00:56:02.968 eflags: none
00:56:02.968 sectype: none
00:56:02.968 =====Discovery Log Entry 1======
00:56:02.968 trtype: tcp
00:56:02.968 adrfam: ipv4
00:56:02.968 subtype: nvme subsystem
00:56:02.968 treq: not specified, sq flow control disable supported
00:56:02.968 portid: 1
00:56:02.968 trsvcid: 4420
00:56:02.968 subnqn: nqn.2024-02.io.spdk:cnode0
00:56:02.968 traddr: 10.0.0.1
00:56:02.968 eflags: none
00:56:02.968 sectype: none
00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:56:02.968 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:56:02.969 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:56:02.969 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:56:02.969 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:56:02.969 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:56:02.969 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:56:02.969 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:02.969 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:02.969 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:56:02.969 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: ]] 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:03.228 nvme0n1 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:03.228 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:03.488 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:03.488 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:03.488 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:03.488 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:03.488 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:03.488 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:56:03.488 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:56:03.488 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:03.488 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:56:03.488 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:03.488 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:56:03.488 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:56:03.488 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:56:03.488 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg4Njg4YzI0NTNiNWRjNjk0MTM4ZTJhNjdlZDNlNTLy+Kbr: 00:56:03.488 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: 00:56:03.488 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:56:03.488 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:56:03.488 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg4Njg4YzI0NTNiNWRjNjk0MTM4ZTJhNjdlZDNlNTLy+Kbr: 00:56:03.488 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: ]] 00:56:03.488 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: 00:56:03.488 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:56:03.488 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:03.488 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:56:03.488 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:56:03.488 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:56:03.488 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:03.488 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:56:03.488 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:03.488 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:03.488 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:03.488 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:03.488 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:03.489 nvme0n1 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:03.489 
10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: ]] 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:03.489 10:07:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:03.489 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:03.749 nvme0n1 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGU2YzVlM2E3Zjg3OGQ1NDVlYzNkMmI4MGYwM2ZlMWI/RIEC: 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: 00:56:03.749 10:07:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGU2YzVlM2E3Zjg3OGQ1NDVlYzNkMmI4MGYwM2ZlMWI/RIEC: 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: ]] 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:03.749 nvme0n1 00:56:03.749 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:04.007 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:04.007 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:04.007 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:04.007 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:04.007 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:04.007 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:04.007 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:04.007 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:04.007 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:04.007 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:04.007 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:04.007 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:56:04.007 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGZmYjMxMjYwMTFhMmRlYmUyZjU1NjAxZGZhMDczYjJmMzhlMGYxZTE3YzhiNDUwCPrKjQ==: 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGZmYjMxMjYwMTFhMmRlYmUyZjU1NjAxZGZhMDczYjJmMzhlMGYxZTE3YzhiNDUwCPrKjQ==: 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: ]] 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:04.008 10:07:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:04.008 nvme0n1 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:04.008 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:04.008 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:04.008 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:04.008 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:04.008 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:04.008 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:04.008 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:04.008 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:56:04.008 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:04.008 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:56:04.008 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:56:04.008 
10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:56:04.008 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDNiYjE3OWEwZTViZmNkMTA3Yjc5MjFlOTM1YTEyMjRjMDI1ZWVhZDYzNmM5MjM1NDFjMWQ0NjlkZDIxNWY4YlJjjsw=: 00:56:04.008 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:56:04.008 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:56:04.008 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:56:04.008 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDNiYjE3OWEwZTViZmNkMTA3Yjc5MjFlOTM1YTEyMjRjMDI1ZWVhZDYzNmM5MjM1NDFjMWQ0NjlkZDIxNWY4YlJjjsw=: 00:56:04.008 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:56:04.008 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:56:04.008 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:04.008 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:56:04.008 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:56:04.008 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:56:04.008 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:04.008 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:56:04.008 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:04.008 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
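[annotation] Note the controller-key handling just traced: for keyid 4 the ckey is empty ('' at host/auth.sh@46), so the [[ -z '' ]] guard skips the ctrlr-key echo and the attach at host/auth.sh@61 passes only --dhchap-key key4, i.e. unidirectional authentication, whereas keyid 3 above also passed --dhchap-ctrlr-key ckey3 for bidirectional authentication. The conditional array expansion at host/auth.sh@58 is the idiom that makes the extra flag appear or vanish; copied from the trace:

    # When ckeys[keyid] is empty/unset the array expands to nothing, so the
    # controller-key flag simply disappears from the rpc_cmd invocation.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"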
00:56:04.267 nvme0n1 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg4Njg4YzI0NTNiNWRjNjk0MTM4ZTJhNjdlZDNlNTLy+Kbr: 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:56:04.267 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:56:04.526 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg4Njg4YzI0NTNiNWRjNjk0MTM4ZTJhNjdlZDNlNTLy+Kbr: 00:56:04.526 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: ]] 00:56:04.526 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: 00:56:04.526 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:56:04.526 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:04.526 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:56:04.526 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:56:04.526 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:56:04.526 10:07:11 
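[annotation] Every secret in this run uses the NVMe-oF DH-HMAC-CHAP representation DHHC-1:<hh>:<base64>:, where the two-digit field after DHHC-1 records how the raw secret was transformed (by nvme-cli's convention 00 = no hash, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512); it is unrelated to the keyid the script iterates over, as the mix of 00/01/02/03 prefixes across keyids here shows. Secrets of this shape are typically produced with nvme-cli; a sketch, with flag names assumed from nvme-cli's gen-dhchap-key and worth checking against the installed version:

    # Assumed nvme-cli invocation (verify flags against your nvme-cli version).
    nvme gen-dhchap-key --hmac 1 --key-length 32 --nqn nqn.2024-02.io.spdk:host0
    # prints e.g. DHHC-1:01:<base64>: -- usable by both nvmet configfs and SPDK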
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:04.526 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:56:04.526 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:04.526 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:04.526 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:04.526 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:04.526 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:04.526 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:04.526 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:04.526 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:04.526 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:04.526 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:04.526 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:04.526 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:04.526 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:04.526 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:04.526 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:56:04.526 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:04.526 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:04.785 nvme0n1 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:04.785 10:07:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: ]] 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:04.785 10:07:11 
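[annotation] Each connect_authenticate pass boils down to two SPDK RPCs: restrict the initiator to one digest/DH-group combination with bdev_nvme_set_options, then attach with the matching key names. Outside this harness the same calls can be issued with scripts/rpc.py directly; a standalone sketch of the ffdhe3072/keyid-1 pass traced here, assuming key1/ckey1 were already registered as SPDK keyring entries earlier in the run (that setup precedes this excerpt):

    # Flags copied from the trace above; the rpc.py path and prior keyring
    # registration of key1/ckey1 are assumptions about the surrounding setup.
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1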
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:04.785 nvme0n1 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:04.785 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGU2YzVlM2E3Zjg3OGQ1NDVlYzNkMmI4MGYwM2ZlMWI/RIEC: 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGU2YzVlM2E3Zjg3OGQ1NDVlYzNkMmI4MGYwM2ZlMWI/RIEC: 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: ]] 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:05.044 nvme0n1 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:05.044 10:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:05.044 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:05.044 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:56:05.044 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:05.044 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:05.044 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:05.044 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:05.044 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:56:05.044 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:05.044 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:56:05.044 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:56:05.044 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:56:05.044 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGZmYjMxMjYwMTFhMmRlYmUyZjU1NjAxZGZhMDczYjJmMzhlMGYxZTE3YzhiNDUwCPrKjQ==: 00:56:05.044 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: 00:56:05.044 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:56:05.044 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:56:05.044 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGZmYjMxMjYwMTFhMmRlYmUyZjU1NjAxZGZhMDczYjJmMzhlMGYxZTE3YzhiNDUwCPrKjQ==: 00:56:05.044 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: ]] 00:56:05.044 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: 00:56:05.044 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:56:05.044 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:05.044 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:56:05.044 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:56:05.044 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:56:05.045 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:05.045 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:56:05.045 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:05.045 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:05.045 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:05.045 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:05.045 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:05.045 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:05.045 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:05.045 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:05.045 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:05.045 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:05.045 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:05.045 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:05.045 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:05.045 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:05.045 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:56:05.045 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:05.045 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:05.302 nvme0n1 00:56:05.302 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:05.302 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:05.302 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:05.302 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:05.302 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDNiYjE3OWEwZTViZmNkMTA3Yjc5MjFlOTM1YTEyMjRjMDI1ZWVhZDYzNmM5MjM1NDFjMWQ0NjlkZDIxNWY4YlJjjsw=: 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDNiYjE3OWEwZTViZmNkMTA3Yjc5MjFlOTM1YTEyMjRjMDI1ZWVhZDYzNmM5MjM1NDFjMWQ0NjlkZDIxNWY4YlJjjsw=: 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:05.303 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:05.561 nvme0n1 00:56:05.561 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:05.561 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:05.561 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:05.561 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:05.561 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:05.561 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:05.561 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:05.561 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:05.561 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:05.561 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:05.561 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:05.561 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:56:05.561 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:05.561 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:56:05.561 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:05.561 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:56:05.561 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:56:05.561 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:56:05.561 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg4Njg4YzI0NTNiNWRjNjk0MTM4ZTJhNjdlZDNlNTLy+Kbr: 00:56:05.561 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: 00:56:05.561 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:56:05.561 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:56:05.820 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg4Njg4YzI0NTNiNWRjNjk0MTM4ZTJhNjdlZDNlNTLy+Kbr: 00:56:05.820 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: ]] 00:56:05.820 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: 00:56:05.820 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:56:05.820 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:05.820 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:56:05.820 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:56:05.820 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:56:05.820 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:05.820 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:56:05.820 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:05.820 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:06.079 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:06.079 10:07:12 
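[annotation] Between attaches, the harness verifies that authentication actually produced a controller before tearing it down (host/auth.sh@64-65): it lists controllers, extracts the name with jq, asserts it is nvme0, and detaches; the bare nvme0n1 tokens interleaved in the log are the namespace block device surfacing after each successful attach. Reconstructed as standalone shell:

    # Reconstructed from host/auth.sh@64-65 as traced above.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]                       # no controller => authentication failed
    rpc_cmd bdev_nvme_detach_controller nvme0    # clean teardown before the next pass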
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:06.079 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:06.079 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:06.079 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:06.079 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:06.079 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:06.079 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:06.079 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:06.079 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:06.079 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:06.079 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:06.079 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:56:06.079 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:06.079 10:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:06.079 nvme0n1 00:56:06.079 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:06.079 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:06.079 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:06.079 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:06.079 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:06.079 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:06.079 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:06.079 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:06.079 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:06.079 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:06.079 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:06.079 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:06.079 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:56:06.079 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:06.079 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:56:06.079 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:56:06.079 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:56:06.079 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:06.079 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:06.079 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:56:06.079 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:56:06.079 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:06.079 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: ]] 00:56:06.079 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:06.079 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:56:06.079 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:06.079 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:56:06.079 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:56:06.079 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:56:06.079 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:06.079 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:56:06.079 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:06.079 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:56:06.338 10:07:13 
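[annotation] The recurring xtrace_disable / '[[ 0 == 0 ]]' pairs bracketing every rpc_cmd are the harness's return-code bookkeeping from autotest_common.sh: tracing is muted inside the helper and the captured status is compared against the expected 0 on the way out. A minimal sketch of that wrapper pattern follows; the real autotest_common.sh body is not shown in this log and almost certainly differs, so treat this as an illustration of the shape, not the implementation.

    # Sketch only -- body assumed; the trace shows just the entry/exit lines.
    rpc_cmd() {
        xtrace_disable                 # autotest_common.sh@563: mute helper internals
        local rc=0
        "$rootdir/scripts/rpc.py" "$@" || rc=$?
        xtrace_restore
        [[ $rc == 0 ]]                 # autotest_common.sh@591: the '[[ 0 == 0 ]]' check
    }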
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:06.338 nvme0n1 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGU2YzVlM2E3Zjg3OGQ1NDVlYzNkMmI4MGYwM2ZlMWI/RIEC: 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGU2YzVlM2E3Zjg3OGQ1NDVlYzNkMmI4MGYwM2ZlMWI/RIEC: 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: ]] 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:06.338 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:06.597 nvme0n1 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGZmYjMxMjYwMTFhMmRlYmUyZjU1NjAxZGZhMDczYjJmMzhlMGYxZTE3YzhiNDUwCPrKjQ==: 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:56:06.597 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:56:06.598 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGZmYjMxMjYwMTFhMmRlYmUyZjU1NjAxZGZhMDczYjJmMzhlMGYxZTE3YzhiNDUwCPrKjQ==: 00:56:06.598 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: ]] 00:56:06.598 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: 00:56:06.598 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:56:06.598 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:06.598 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:56:06.598 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:56:06.598 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:56:06.598 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:06.598 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:56:06.598 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:06.857 nvme0n1 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDNiYjE3OWEwZTViZmNkMTA3Yjc5MjFlOTM1YTEyMjRjMDI1ZWVhZDYzNmM5MjM1NDFjMWQ0NjlkZDIxNWY4YlJjjsw=: 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDNiYjE3OWEwZTViZmNkMTA3Yjc5MjFlOTM1YTEyMjRjMDI1ZWVhZDYzNmM5MjM1NDFjMWQ0NjlkZDIxNWY4YlJjjsw=: 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:56:06.857 10:07:13 
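[annotation] The outer structure driving all of this is a matrix sweep (host/auth.sh@101-103): for each DH group in the configured list, every key id is provisioned on the target and then exercised end to end; the trace has just finished ffdhe4096 and is about to move to ffdhe6144. Schematically, with only sha256 appearing as the digest in this excerpt:

    # Schematic of the sweep traced at host/auth.sh@101-103; keys/ckeys hold
    # the DHHC-1 secrets provisioned earlier in the run.
    for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ...
        for keyid in "${!keys[@]}"; do       # 0 1 2 3 4
            nvmet_auth_set_key sha256 "$dhgroup" "$keyid"
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done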
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:06.857 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:07.115 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:07.115 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:07.115 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:07.115 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:07.115 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:07.115 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:07.116 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:07.116 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:07.116 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:07.116 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:07.116 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:07.116 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:07.116 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:56:07.116 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:07.116 10:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:07.116 nvme0n1 00:56:07.116 10:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:07.116 10:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:07.116 10:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:07.116 10:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:07.116 10:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:07.116 10:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:07.116 10:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:07.116 10:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:07.116 10:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:07.116 10:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:07.116 10:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:07.116 10:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:56:07.116 10:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:07.116 10:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:56:07.116 10:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:07.116 10:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:56:07.116 10:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:56:07.116 10:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:56:07.116 10:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg4Njg4YzI0NTNiNWRjNjk0MTM4ZTJhNjdlZDNlNTLy+Kbr: 00:56:07.116 10:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: 00:56:07.116 10:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:56:07.116 10:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:56:08.513 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg4Njg4YzI0NTNiNWRjNjk0MTM4ZTJhNjdlZDNlNTLy+Kbr: 00:56:08.513 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: ]] 00:56:08.513 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: 00:56:08.513 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:56:08.513 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:08.513 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:56:08.513 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:56:08.513 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:56:08.513 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:08.513 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:56:08.513 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:08.513 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:08.513 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:08.513 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:08.513 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:08.513 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:08.513 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:08.513 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:08.513 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:08.513 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:08.513 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:08.513 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:08.513 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:08.513 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:08.513 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:56:08.513 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:08.513 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:08.771 nvme0n1 00:56:08.771 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:08.771 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:08.771 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:08.771 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:08.771 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:08.771 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: ]] 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:09.031 10:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:09.291 nvme0n1 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:09.291 10:07:16 
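On the initiator side, connect_authenticate (host/auth.sh@104 and @55-61) restricts the allowed digests and DH groups, resolves the target address, and attaches with the matching key pair. Expressed as direct SPDK RPC calls with the values this ffdhe6144/keyid=1 iteration uses (a sketch: the test drives these through its rpc_cmd wrapper rather than scripts/rpc.py, and key1/ckey1 are key names registered earlier in the run):

  scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
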
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGU2YzVlM2E3Zjg3OGQ1NDVlYzNkMmI4MGYwM2ZlMWI/RIEC: 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGU2YzVlM2E3Zjg3OGQ1NDVlYzNkMmI4MGYwM2ZlMWI/RIEC: 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: ]] 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:09.291 10:07:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:09.291 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:09.551 nvme0n1 00:56:09.551 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:09.551 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:09.551 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:09.551 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:09.551 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:09.551 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:09.551 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:09.551 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:09.551 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:09.551 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:09.551 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:09.551 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:09.551 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:56:09.551 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:09.811 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:56:09.811 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:56:09.811 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:56:09.811 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MGZmYjMxMjYwMTFhMmRlYmUyZjU1NjAxZGZhMDczYjJmMzhlMGYxZTE3YzhiNDUwCPrKjQ==: 00:56:09.811 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: 00:56:09.811 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:56:09.811 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:56:09.811 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGZmYjMxMjYwMTFhMmRlYmUyZjU1NjAxZGZhMDczYjJmMzhlMGYxZTE3YzhiNDUwCPrKjQ==: 00:56:09.811 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: ]] 00:56:09.811 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: 00:56:09.811 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:56:09.811 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:09.811 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:56:09.811 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:56:09.811 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:56:09.811 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:09.811 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:56:09.811 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:09.811 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:09.811 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:09.811 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:09.811 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:09.811 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:09.811 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:09.812 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:09.812 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:09.812 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:09.812 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:09.812 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:09.812 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:09.812 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:09.812 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:56:09.812 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:09.812 
10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:10.071 nvme0n1 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDNiYjE3OWEwZTViZmNkMTA3Yjc5MjFlOTM1YTEyMjRjMDI1ZWVhZDYzNmM5MjM1NDFjMWQ0NjlkZDIxNWY4YlJjjsw=: 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDNiYjE3OWEwZTViZmNkMTA3Yjc5MjFlOTM1YTEyMjRjMDI1ZWVhZDYzNmM5MjM1NDFjMWQ0NjlkZDIxNWY4YlJjjsw=: 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:10.071 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:10.072 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:10.072 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:10.072 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:10.072 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:10.072 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:10.072 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:56:10.072 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:10.072 10:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:10.330 nvme0n1 00:56:10.330 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:10.330 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:10.330 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:10.330 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:10.330 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:10.330 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:10.330 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:10.330 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:10.330 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:10.330 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:10.588 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:10.588 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:56:10.588 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:10.588 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:56:10.588 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:10.588 10:07:17 
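The get_main_ns_ip helper traced repeatedly above (nvmf/common.sh@769-783) picks the address to dial based on the active transport. A plausible reconstruction from the traced tests; the TEST_TRANSPORT variable name and the early-return behavior are assumptions, since only the already-expanded values appear in the log:

  get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1                    # traced as: [[ -z tcp ]]
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # traced as: [[ -z NVMF_INITIATOR_IP ]]
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1  # indirect expansion; traced as: [[ -z 10.0.0.1 ]]
    echo "${!ip}"                # -> 10.0.0.1 in this run
  }
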
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:56:10.588 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:56:10.588 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:56:10.588 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg4Njg4YzI0NTNiNWRjNjk0MTM4ZTJhNjdlZDNlNTLy+Kbr: 00:56:10.588 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: 00:56:10.588 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:56:10.588 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:56:10.588 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg4Njg4YzI0NTNiNWRjNjk0MTM4ZTJhNjdlZDNlNTLy+Kbr: 00:56:10.588 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: ]] 00:56:10.588 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: 00:56:10.588 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:56:10.588 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:10.588 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:56:10.588 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:56:10.588 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:56:10.589 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:10.589 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:56:10.589 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:10.589 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:10.589 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:10.589 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:10.589 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:10.589 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:10.589 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:10.589 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:10.589 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:10.589 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:10.589 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:10.589 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:10.589 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:10.589 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:10.589 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:56:10.589 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:10.589 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:11.156 nvme0n1 00:56:11.156 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:11.156 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:11.156 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:11.156 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:11.156 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:11.156 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:11.156 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:11.156 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:11.156 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:11.156 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:11.156 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:11.156 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:11.156 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:56:11.156 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:11.156 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:56:11.156 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:56:11.156 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:56:11.156 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:11.156 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:11.156 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:56:11.156 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:56:11.156 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:11.156 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: ]] 00:56:11.157 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:11.157 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:56:11.157 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:11.157 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:56:11.157 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:56:11.157 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:56:11.157 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:11.157 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:56:11.157 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:11.157 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:11.157 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:11.157 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:11.157 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:11.157 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:11.157 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:11.157 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:11.157 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:11.157 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:11.157 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:11.157 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:11.157 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:11.157 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:11.157 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:56:11.157 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:11.157 10:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:11.723 nvme0n1 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGU2YzVlM2E3Zjg3OGQ1NDVlYzNkMmI4MGYwM2ZlMWI/RIEC: 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGU2YzVlM2E3Zjg3OGQ1NDVlYzNkMmI4MGYwM2ZlMWI/RIEC: 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: ]] 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:11.723 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:11.724 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:11.724 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:11.724 
10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:11.724 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:11.724 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:11.724 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:11.724 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:11.724 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:56:11.724 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:11.724 10:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:12.292 nvme0n1 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGZmYjMxMjYwMTFhMmRlYmUyZjU1NjAxZGZhMDczYjJmMzhlMGYxZTE3YzhiNDUwCPrKjQ==: 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGZmYjMxMjYwMTFhMmRlYmUyZjU1NjAxZGZhMDczYjJmMzhlMGYxZTE3YzhiNDUwCPrKjQ==: 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: ]] 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:12.292 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:12.859 nvme0n1 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:12.859 10:07:19 
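Each iteration is verified and torn down the same way (host/auth.sh@64-65): a successful authentication must leave exactly one controller named nvme0 behind, which is then detached before the next key is tried. As plain RPC calls (a sketch; the \n\v\m\e\0 escaping in the trace is just xtrace's rendering of a quoted right-hand side):

  ctrl=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $ctrl == "nvme0" ]]  # authentication succeeded and the controller came up
  scripts/rpc.py bdev_nvme_detach_controller nvme0
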
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDNiYjE3OWEwZTViZmNkMTA3Yjc5MjFlOTM1YTEyMjRjMDI1ZWVhZDYzNmM5MjM1NDFjMWQ0NjlkZDIxNWY4YlJjjsw=: 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDNiYjE3OWEwZTViZmNkMTA3Yjc5MjFlOTM1YTEyMjRjMDI1ZWVhZDYzNmM5MjM1NDFjMWQ0NjlkZDIxNWY4YlJjjsw=: 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:12.859 10:07:19 
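All secrets in this log use the DHHC-1 representation from the NVMe specification: DHHC-1:<t>:<base64 secret+CRC>:, where <t> names the transformation hash (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512). Comparable keys can be generated with nvme-cli, assuming its gen-dhchap-key subcommand and these flag spellings (not taken from this log):

  # Generate a 32-byte secret transformed with SHA-256 (yields a DHHC-1:01: key)
  nvme gen-dhchap-key --key-length=32 --hmac=1 --nqn=nqn.2024-02.io.spdk:host0
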
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:12.859 10:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:13.427 nvme0n1 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg4Njg4YzI0NTNiNWRjNjk0MTM4ZTJhNjdlZDNlNTLy+Kbr: 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg4Njg4YzI0NTNiNWRjNjk0MTM4ZTJhNjdlZDNlNTLy+Kbr: 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: ]] 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:56:13.427 nvme0n1 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:13.427 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:13.686 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:13.686 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:13.686 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:13.686 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:13.686 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:13.686 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:13.686 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:13.686 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:56:13.686 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:13.686 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:56:13.686 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:56:13.686 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:56:13.686 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:13.686 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:13.686 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: ]] 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:13.687 nvme0n1 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:56:13.687 
10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGU2YzVlM2E3Zjg3OGQ1NDVlYzNkMmI4MGYwM2ZlMWI/RIEC: 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGU2YzVlM2E3Zjg3OGQ1NDVlYzNkMmI4MGYwM2ZlMWI/RIEC: 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: ]] 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:13.687 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:13.946 nvme0n1 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGZmYjMxMjYwMTFhMmRlYmUyZjU1NjAxZGZhMDczYjJmMzhlMGYxZTE3YzhiNDUwCPrKjQ==: 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGZmYjMxMjYwMTFhMmRlYmUyZjU1NjAxZGZhMDczYjJmMzhlMGYxZTE3YzhiNDUwCPrKjQ==: 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: ]] 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:13.946 
10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:13.946 10:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:14.207 nvme0n1 00:56:14.207 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:14.207 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:14.207 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:14.207 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:14.207 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:14.207 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:14.207 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:14.207 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:14.207 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:14.207 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:56:14.207 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:14.207 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:14.207 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:56:14.207 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:14.207 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:56:14.207 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:56:14.207 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:56:14.207 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDNiYjE3OWEwZTViZmNkMTA3Yjc5MjFlOTM1YTEyMjRjMDI1ZWVhZDYzNmM5MjM1NDFjMWQ0NjlkZDIxNWY4YlJjjsw=: 00:56:14.207 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDNiYjE3OWEwZTViZmNkMTA3Yjc5MjFlOTM1YTEyMjRjMDI1ZWVhZDYzNmM5MjM1NDFjMWQ0NjlkZDIxNWY4YlJjjsw=: 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:14.208 nvme0n1 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg4Njg4YzI0NTNiNWRjNjk0MTM4ZTJhNjdlZDNlNTLy+Kbr: 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg4Njg4YzI0NTNiNWRjNjk0MTM4ZTJhNjdlZDNlNTLy+Kbr: 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: ]] 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:56:14.208 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:14.480 nvme0n1 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:14.480 
10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: ]] 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:14.480 10:07:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:14.480 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:14.481 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:56:14.481 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:14.481 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:14.739 nvme0n1 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGU2YzVlM2E3Zjg3OGQ1NDVlYzNkMmI4MGYwM2ZlMWI/RIEC: 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: 00:56:14.739 10:07:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGU2YzVlM2E3Zjg3OGQ1NDVlYzNkMmI4MGYwM2ZlMWI/RIEC: 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: ]] 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:14.739 nvme0n1 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:14.739 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGZmYjMxMjYwMTFhMmRlYmUyZjU1NjAxZGZhMDczYjJmMzhlMGYxZTE3YzhiNDUwCPrKjQ==: 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGZmYjMxMjYwMTFhMmRlYmUyZjU1NjAxZGZhMDczYjJmMzhlMGYxZTE3YzhiNDUwCPrKjQ==: 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: ]] 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:14.998 10:07:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:14.998 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:56:14.999 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:14.999 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:14.999 nvme0n1 00:56:14.999 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:14.999 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:14.999 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:14.999 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:14.999 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:14.999 10:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:14.999 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:14.999 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:14.999 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:14.999 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:14.999 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:14.999 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:14.999 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:56:14.999 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:14.999 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:56:14.999 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:56:14.999 
10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:56:14.999 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDNiYjE3OWEwZTViZmNkMTA3Yjc5MjFlOTM1YTEyMjRjMDI1ZWVhZDYzNmM5MjM1NDFjMWQ0NjlkZDIxNWY4YlJjjsw=: 00:56:14.999 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:56:14.999 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:56:14.999 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:56:14.999 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDNiYjE3OWEwZTViZmNkMTA3Yjc5MjFlOTM1YTEyMjRjMDI1ZWVhZDYzNmM5MjM1NDFjMWQ0NjlkZDIxNWY4YlJjjsw=: 00:56:14.999 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:56:14.999 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:56:14.999 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:14.999 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:56:14.999 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:56:14.999 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:56:14.999 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:14.999 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:56:14.999 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:14.999 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:14.999 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:14.999 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:14.999 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:14.999 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:14.999 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
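[editor's note] The trace above repeats one pattern per (digest, dhgroup, keyid) combination. A condensed reconstruction of that sweep, in bash, follows; every identifier in it (digests, dhgroups, keys, ckeys, nvmet_auth_set_key, connect_authenticate, rpc_cmd, the NQNs and 10.0.0.1:4420) comes straight from the trace, but the sketch is a simplification under assumptions — the real host/auth.sh carries target setup, cleanup traps and error handling not shown here, and the exact placement of the verification step inside connect_authenticate is approximate.

    # Sketch of the sweep driving host/auth.sh@100-104 in the trace.
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                # Target side: program key/ckey for this combination (@103).
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                # Host side: negotiate DH-HMAC-CHAP and verify (@104).
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Restrict the SPDK host to a single digest/DH-group so the
        # negotiated parameters are exactly the pair under test (@60).
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
            --dhchap-dhgroups "$dhgroup"
        # Pass the controller key only when one is configured for this
        # keyid (bidirectional auth) -- the expansion trick from @58.
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid" "${ckey[@]}"
        # The attach only succeeds if authentication completed, so the
        # controller showing up is the pass condition (@64), then detach (@65).
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The interleaved "nvme0n1" lines in the log are the namespace's block device appearing after each successful attach, i.e. the observable effect of a completed handshake.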
00:56:15.257 nvme0n1 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg4Njg4YzI0NTNiNWRjNjk0MTM4ZTJhNjdlZDNlNTLy+Kbr: 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg4Njg4YzI0NTNiNWRjNjk0MTM4ZTJhNjdlZDNlNTLy+Kbr: 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: ]] 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:56:15.257 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:56:15.258 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:56:15.258 10:07:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:15.258 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:56:15.258 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:15.258 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:15.258 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:15.258 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:15.258 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:15.258 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:15.258 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:15.258 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:15.258 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:15.258 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:15.258 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:15.258 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:15.258 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:15.258 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:15.258 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:56:15.258 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:15.258 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:15.517 nvme0n1 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:15.517 10:07:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: ]] 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:15.517 10:07:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:15.517 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:15.777 nvme0n1 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGU2YzVlM2E3Zjg3OGQ1NDVlYzNkMmI4MGYwM2ZlMWI/RIEC: 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGU2YzVlM2E3Zjg3OGQ1NDVlYzNkMmI4MGYwM2ZlMWI/RIEC: 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: ]] 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:15.777 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:16.036 nvme0n1 00:56:16.036 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:16.036 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:16.036 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:16.036 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:16.036 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:16.036 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:16.036 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:16.036 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:56:16.036 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:16.036 10:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGZmYjMxMjYwMTFhMmRlYmUyZjU1NjAxZGZhMDczYjJmMzhlMGYxZTE3YzhiNDUwCPrKjQ==: 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGZmYjMxMjYwMTFhMmRlYmUyZjU1NjAxZGZhMDczYjJmMzhlMGYxZTE3YzhiNDUwCPrKjQ==: 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: ]] 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:16.036 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:16.296 nvme0n1 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDNiYjE3OWEwZTViZmNkMTA3Yjc5MjFlOTM1YTEyMjRjMDI1ZWVhZDYzNmM5MjM1NDFjMWQ0NjlkZDIxNWY4YlJjjsw=: 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDNiYjE3OWEwZTViZmNkMTA3Yjc5MjFlOTM1YTEyMjRjMDI1ZWVhZDYzNmM5MjM1NDFjMWQ0NjlkZDIxNWY4YlJjjsw=: 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:16.296 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:16.555 nvme0n1 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg4Njg4YzI0NTNiNWRjNjk0MTM4ZTJhNjdlZDNlNTLy+Kbr: 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg4Njg4YzI0NTNiNWRjNjk0MTM4ZTJhNjdlZDNlNTLy+Kbr: 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: ]] 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:16.555 10:07:23 
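nvmet_auth_set_key itself (host/auth.sh@42-51) traces as four bare echo commands because xtrace hides redirections. Assuming the writes land on the stock Linux nvmet configfs attributes for in-band authentication, the iteration above (sha384, ffdhe6144, keyid=0) would amount to:

    # Hedged reconstruction: the configfs paths are the standard kernel nvmet
    # auth attributes, not something this log states explicitly.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)'   > "$host/dhchap_hash"      # sh@48
    echo ffdhe6144        > "$host/dhchap_dhgroup"   # sh@49
    echo 'DHHC-1:00:...:' > "$host/dhchap_key"       # sh@50, host secret (elided)
    echo 'DHHC-1:03:...:' > "$host/dhchap_ctrl_key"  # sh@51, only when a ckey exists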
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:16.555 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:16.814 nvme0n1 00:56:16.814 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:16.814 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:16.814 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:16.814 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:16.814 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:16.814 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: ]] 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:56:17.078 10:07:23 
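connect_authenticate then drives the SPDK initiator with exactly two RPCs, traced at host/auth.sh@60-61 above. rpc_cmd forwards its arguments to SPDK's rpc.py, so the flags below are verbatim from the log; only the script path is assumed, and the key1/ckey1 names presume the secrets were registered with the keyring earlier in the run:

    # The sh@60-61 RPC pair for the sha384/ffdhe6144/keyid=1 iteration.
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1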
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:17.078 10:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:17.337 nvme0n1 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGU2YzVlM2E3Zjg3OGQ1NDVlYzNkMmI4MGYwM2ZlMWI/RIEC: 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGU2YzVlM2E3Zjg3OGQ1NDVlYzNkMmI4MGYwM2ZlMWI/RIEC: 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: ]] 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:17.337 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:17.595 nvme0n1 00:56:17.595 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:17.595 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:17.595 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:17.595 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:17.595 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:17.595 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGZmYjMxMjYwMTFhMmRlYmUyZjU1NjAxZGZhMDczYjJmMzhlMGYxZTE3YzhiNDUwCPrKjQ==: 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGZmYjMxMjYwMTFhMmRlYmUyZjU1NjAxZGZhMDczYjJmMzhlMGYxZTE3YzhiNDUwCPrKjQ==: 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: ]] 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:17.854 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:18.123 nvme0n1 00:56:18.123 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:18.123 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:18.123 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:18.123 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:18.123 10:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:18.123 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:18.123 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:18.123 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:18.123 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:18.123 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:18.123 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:18.123 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:18.123 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:56:18.123 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:18.123 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:56:18.123 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:56:18.123 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:56:18.123 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDNiYjE3OWEwZTViZmNkMTA3Yjc5MjFlOTM1YTEyMjRjMDI1ZWVhZDYzNmM5MjM1NDFjMWQ0NjlkZDIxNWY4YlJjjsw=: 00:56:18.123 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:56:18.123 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:56:18.123 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:56:18.123 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDNiYjE3OWEwZTViZmNkMTA3Yjc5MjFlOTM1YTEyMjRjMDI1ZWVhZDYzNmM5MjM1NDFjMWQ0NjlkZDIxNWY4YlJjjsw=: 00:56:18.124 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:56:18.124 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:56:18.124 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:18.124 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:56:18.124 10:07:25 
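Every secret in this section is in DHHC-1 transport format, DHHC-1:<t>:<base64>:, where <t> selects the optional key transform (00 none, 01/02/03 HMAC-SHA-256/384/512) and the base64 payload carries the secret plus a CRC-32. The keyid=4 entry just above pairs a long 03-transformed secret with an empty ckey (sh@46), which is why its attach omits --dhchap-ctrlr-key. A secret in that shape can be minted with nvme-cli; the invocation below is an assumption, not something this log runs:

    # Assumed nvme-cli call: 48-byte secret with the SHA-384 transform, to
    # match the test's --dhchap-digests sha384 setting. Not part of auth.sh.
    nvme gen-dhchap-key --key-length=48 --hmac=2 --nqn=nqn.2024-02.io.spdk:host0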
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:56:18.124 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:56:18.124 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:18.124 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:56:18.124 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:18.124 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:18.124 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:18.124 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:18.124 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:18.124 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:18.124 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:18.124 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:18.124 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:18.124 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:18.124 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:18.124 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:18.124 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:18.124 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:18.124 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:56:18.124 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:18.124 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:18.382 nvme0n1 00:56:18.382 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:18.382 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:18.382 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:18.382 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:18.382 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:18.382 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:18.382 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:18.382 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:18.382 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:18.382 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg4Njg4YzI0NTNiNWRjNjk0MTM4ZTJhNjdlZDNlNTLy+Kbr: 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg4Njg4YzI0NTNiNWRjNjk0MTM4ZTJhNjdlZDNlNTLy+Kbr: 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: ]] 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:18.641 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:19.207 nvme0n1 00:56:19.207 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:19.207 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:19.207 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:19.207 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:19.207 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:19.207 10:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: ]] 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:19.207 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:19.774 nvme0n1 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:19.774 10:07:26 
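Each iteration ends with the same success check, traced at host/auth.sh@64-65: the freshly attached controller must come back from bdev_nvme_get_controllers under the requested name, and it is then detached so the next digest/dhgroup/key combination starts from a clean slate. As standalone commands:

    # Post-attach verification and teardown mirroring host/auth.sh@64-65
    # (rpc.py path assumed, jq filter verbatim from the trace).
    name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]] || exit 1   # the trace's [[ nvme0 == \n\v\m\e\0 ]]
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0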
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGU2YzVlM2E3Zjg3OGQ1NDVlYzNkMmI4MGYwM2ZlMWI/RIEC: 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGU2YzVlM2E3Zjg3OGQ1NDVlYzNkMmI4MGYwM2ZlMWI/RIEC: 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: ]] 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:19.774 10:07:26 
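The key0..key4 and ckey0..ckey3 arguments passed to bdev_nvme_attach_controller are keyring handles, not literal secrets; their registration happened before this excerpt begins. Under SPDK's file-based keyring that setup is one RPC per key, roughly as below (file layout and paths assumed):

    # Assumed earlier setup step, outside this excerpt: write each DHHC-1
    # secret to a file, then register it by name with SPDK's keyring.
    printf '%s' 'DHHC-1:01:...:' > /tmp/key2     # secret elided here
    ./scripts/rpc.py keyring_file_add_key key2 /tmp/key2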
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:19.774 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:19.775 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:19.775 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:19.775 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:19.775 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:19.775 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:19.775 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:19.775 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:56:19.775 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:19.775 10:07:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:20.341 nvme0n1 00:56:20.341 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:20.341 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:20.341 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:20.341 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:20.341 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:20.341 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:20.341 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:20.341 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:20.341 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:20.341 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:20.341 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:20.341 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:20.341 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:56:20.341 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:20.341 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:56:20.341 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:56:20.341 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:56:20.341 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MGZmYjMxMjYwMTFhMmRlYmUyZjU1NjAxZGZhMDczYjJmMzhlMGYxZTE3YzhiNDUwCPrKjQ==: 00:56:20.341 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: 00:56:20.341 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:56:20.341 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:56:20.341 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGZmYjMxMjYwMTFhMmRlYmUyZjU1NjAxZGZhMDczYjJmMzhlMGYxZTE3YzhiNDUwCPrKjQ==: 00:56:20.341 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: ]] 00:56:20.341 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: 00:56:20.342 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:56:20.342 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:20.342 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:56:20.342 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:56:20.342 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:56:20.342 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:20.342 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:56:20.342 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:20.342 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:20.342 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:20.342 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:20.342 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:20.342 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:20.342 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:20.342 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:20.342 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:20.342 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:20.342 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:20.342 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:20.342 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:20.342 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:20.342 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:56:20.342 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:20.342 
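
The DHHC-1 strings exchanged here follow the NVMe-oF representation of a DH-HMAC-CHAP secret: the literal prefix DHHC-1:, a two-digit transformation id (00 means use the secret as-is; 01, 02, 03 mark a SHA-256-, SHA-384-, or SHA-512-sized to-be-transformed secret), a base64 blob holding the secret followed by a 4-byte CRC-32 integrity check, and a trailing colon. That is why key2 in this run carries id 01 with a 32-byte payload while key3 carries id 02 with 48 bytes. Secrets in this format can be produced with nvme-cli's gen-dhchap-key; the option spellings below are written from memory and should be checked against nvme gen-dhchap-key --help on the system in use:

# Hypothetical invocation: a 48-byte secret tagged for SHA-384 transformation,
# bound to the host NQN used throughout this test run.
nvme gen-dhchap-key --key-length=48 --hmac=2 --nqn=nqn.2024-02.io.spdk:host0
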
10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:20.912 nvme0n1 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDNiYjE3OWEwZTViZmNkMTA3Yjc5MjFlOTM1YTEyMjRjMDI1ZWVhZDYzNmM5MjM1NDFjMWQ0NjlkZDIxNWY4YlJjjsw=: 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDNiYjE3OWEwZTViZmNkMTA3Yjc5MjFlOTM1YTEyMjRjMDI1ZWVhZDYzNmM5MjM1NDFjMWQ0NjlkZDIxNWY4YlJjjsw=: 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:20.912 10:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:21.483 nvme0n1 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:56:21.483 10:07:28 
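
The host/auth.sh@100-@103 frames above mark the outer loops advancing from sha384/ffdhe8192 to sha512/ffdhe2048: every digest is exercised against every DH group and every key id, each iteration first programming the target and then authenticating from the host. The skeleton below is read directly off those frame numbers; the array contents are inferred from the combinations this excerpt actually reaches (earlier digests and groups, e.g. sha256, would have run before this excerpt) and may not be the test's complete lists:

# Loop structure as traced at host/auth.sh@100-@104.
# Array contents are inferred from this excerpt; treat them as illustrative.
digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side
        done
    done
done
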
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg4Njg4YzI0NTNiNWRjNjk0MTM4ZTJhNjdlZDNlNTLy+Kbr: 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg4Njg4YzI0NTNiNWRjNjk0MTM4ZTJhNjdlZDNlNTLy+Kbr: 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: ]] 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:21.483 10:07:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:21.483 nvme0n1 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:21.483 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:21.741 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: ]] 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:21.742 10:07:28 
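
The nvmf/common.sh@769-@783 frames that repeat before every attach are the expansion of get_main_ns_ip, which maps the transport in use to the name of the environment variable holding the address to dial, then dereferences it (10.0.0.1 throughout this run). A reconstruction from those frames; the variable carrying the transport name is not visible in the trace because it has already expanded to tcp, so TEST_TRANSPORT below is an assumption:

get_main_ns_ip() {
    # nvmf/common.sh@769-@783: pick the right address variable per transport.
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    # @775: bail out if the transport or its candidate variable name is unset
    # (traced here as [[ -z tcp ]] and [[ -z NVMF_INITIATOR_IP ]]).
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to dereference
    [[ -z ${!ip} ]] && return 1            # traced as [[ -z 10.0.0.1 ]]
    echo "${!ip}"                          # 10.0.0.1 in this run
}
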
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:21.742 nvme0n1 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGU2YzVlM2E3Zjg3OGQ1NDVlYzNkMmI4MGYwM2ZlMWI/RIEC: 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGU2YzVlM2E3Zjg3OGQ1NDVlYzNkMmI4MGYwM2ZlMWI/RIEC: 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: ]] 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:21.742 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:22.000 nvme0n1 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGZmYjMxMjYwMTFhMmRlYmUyZjU1NjAxZGZhMDczYjJmMzhlMGYxZTE3YzhiNDUwCPrKjQ==: 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:MGZmYjMxMjYwMTFhMmRlYmUyZjU1NjAxZGZhMDczYjJmMzhlMGYxZTE3YzhiNDUwCPrKjQ==: 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: ]] 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:56:22.000 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:22.001 10:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:22.259 nvme0n1 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDNiYjE3OWEwZTViZmNkMTA3Yjc5MjFlOTM1YTEyMjRjMDI1ZWVhZDYzNmM5MjM1NDFjMWQ0NjlkZDIxNWY4YlJjjsw=: 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDNiYjE3OWEwZTViZmNkMTA3Yjc5MjFlOTM1YTEyMjRjMDI1ZWVhZDYzNmM5MjM1NDFjMWQ0NjlkZDIxNWY4YlJjjsw=: 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:22.259 nvme0n1 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:22.259 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:22.518 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:22.518 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:56:22.518 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:22.518 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:56:22.518 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:22.518 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:56:22.518 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:56:22.518 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:56:22.518 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg4Njg4YzI0NTNiNWRjNjk0MTM4ZTJhNjdlZDNlNTLy+Kbr: 00:56:22.518 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: 00:56:22.518 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:56:22.518 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:56:22.518 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg4Njg4YzI0NTNiNWRjNjk0MTM4ZTJhNjdlZDNlNTLy+Kbr: 00:56:22.518 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: ]] 00:56:22.518 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: 00:56:22.518 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:56:22.518 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:22.518 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:56:22.518 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:56:22.519 nvme0n1 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: ]] 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:22.519 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:22.778 nvme0n1 00:56:22.778 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:22.778 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:22.778 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:22.778 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:22.778 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:22.778 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:22.778 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:22.778 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:22.778 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:22.778 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:22.778 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:22.778 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:22.778 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:56:22.779 
10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:22.779 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:56:22.779 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:56:22.779 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:56:22.779 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGU2YzVlM2E3Zjg3OGQ1NDVlYzNkMmI4MGYwM2ZlMWI/RIEC: 00:56:22.779 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: 00:56:22.779 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:56:22.779 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:56:22.779 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGU2YzVlM2E3Zjg3OGQ1NDVlYzNkMmI4MGYwM2ZlMWI/RIEC: 00:56:22.779 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: ]] 00:56:22.779 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: 00:56:22.779 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:56:22.779 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:22.779 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:56:22.779 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:56:22.779 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:56:22.779 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:22.779 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:56:22.779 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:22.779 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:22.779 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:22.779 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:22.779 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:22.779 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:22.779 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:22.779 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:22.779 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:22.779 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:22.779 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:22.779 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:22.779 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:22.779 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:22.779 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:56:22.779 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:22.779 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:23.038 nvme0n1 00:56:23.038 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:23.038 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:23.038 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:23.038 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:23.038 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:23.038 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:23.038 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:23.038 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:23.038 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:23.038 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:23.038 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:23.038 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:23.038 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:56:23.039 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:23.039 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:56:23.039 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:56:23.039 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:56:23.039 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGZmYjMxMjYwMTFhMmRlYmUyZjU1NjAxZGZhMDczYjJmMzhlMGYxZTE3YzhiNDUwCPrKjQ==: 00:56:23.039 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: 00:56:23.039 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:56:23.039 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:56:23.039 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGZmYjMxMjYwMTFhMmRlYmUyZjU1NjAxZGZhMDczYjJmMzhlMGYxZTE3YzhiNDUwCPrKjQ==: 00:56:23.039 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: ]] 00:56:23.039 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: 00:56:23.039 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:56:23.039 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:23.039 
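
After every attach, the host/auth.sh@64/@65 frames assert that authentication actually produced a live controller before moving on: bdev_nvme_get_controllers is piped through jq to extract the controller name, which must equal nvme0, and the controller is then detached so the next digest/group/key combination starts from a clean state. Reduced to its core, with names exactly as traced:

# Post-connect verification and teardown, per host/auth.sh@64-@65.
name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]]                      # the handshake must have yielded nvme0
rpc_cmd bdev_nvme_detach_controller nvme0   # reset state for the next iteration
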
10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:56:23.039 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:56:23.039 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:56:23.039 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:23.039 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:56:23.039 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:23.039 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:23.039 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:23.039 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:23.039 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:23.039 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:23.039 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:23.039 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:23.039 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:23.039 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:23.039 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:23.039 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:23.039 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:23.039 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:23.039 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:56:23.039 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:23.039 10:07:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:23.039 nvme0n1 00:56:23.039 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:23.039 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:23.039 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:23.039 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:23.039 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:23.039 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDNiYjE3OWEwZTViZmNkMTA3Yjc5MjFlOTM1YTEyMjRjMDI1ZWVhZDYzNmM5MjM1NDFjMWQ0NjlkZDIxNWY4YlJjjsw=: 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDNiYjE3OWEwZTViZmNkMTA3Yjc5MjFlOTM1YTEyMjRjMDI1ZWVhZDYzNmM5MjM1NDFjMWQ0NjlkZDIxNWY4YlJjjsw=: 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:23.297 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:23.298 nvme0n1 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg4Njg4YzI0NTNiNWRjNjk0MTM4ZTJhNjdlZDNlNTLy+Kbr: 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg4Njg4YzI0NTNiNWRjNjk0MTM4ZTJhNjdlZDNlNTLy+Kbr: 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: ]] 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:23.298 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:23.557 nvme0n1 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:23.557 
10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: ]] 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:23.557 10:07:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:23.557 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:23.816 nvme0n1 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGU2YzVlM2E3Zjg3OGQ1NDVlYzNkMmI4MGYwM2ZlMWI/RIEC: 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: 00:56:23.816 10:07:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGU2YzVlM2E3Zjg3OGQ1NDVlYzNkMmI4MGYwM2ZlMWI/RIEC: 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: ]] 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:23.816 10:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:24.075 nvme0n1 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGZmYjMxMjYwMTFhMmRlYmUyZjU1NjAxZGZhMDczYjJmMzhlMGYxZTE3YzhiNDUwCPrKjQ==: 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGZmYjMxMjYwMTFhMmRlYmUyZjU1NjAxZGZhMDczYjJmMzhlMGYxZTE3YzhiNDUwCPrKjQ==: 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: ]] 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:24.075 10:07:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:24.075 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:24.333 nvme0n1 00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:56:24.333 
10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDNiYjE3OWEwZTViZmNkMTA3Yjc5MjFlOTM1YTEyMjRjMDI1ZWVhZDYzNmM5MjM1NDFjMWQ0NjlkZDIxNWY4YlJjjsw=:
00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDNiYjE3OWEwZTViZmNkMTA3Yjc5MjFlOTM1YTEyMjRjMDI1ZWVhZDYzNmM5MjM1NDFjMWQ0NjlkZDIxNWY4YlJjjsw=:
00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4
00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:56:24.333 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
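
Every iteration traced in this log follows the same shape: host/auth.sh installs a DHHC-1 secret on the target side via nvmet_auth_set_key, then connect_authenticate pins the host to a single digest/dhgroup combination and attaches a controller with the matching key pair. The following is a minimal sketch of that sweep reconstructed from the xtrace entries above, not the literal SPDK script; only the RPC names and their arguments appear in the log, while the rpc_cmd wrapper, the dhgroups list, and the keys/ckeys arrays are assumed to be provided by the test environment (the DHHC-1 secrets themselves are the ones printed in the trace).

    # Sketch of the digest/dhgroup/keyid sweep this trace is executing.
    # Assumes rpc_cmd forwards to scripts/rpc.py against a running SPDK
    # target, and keys[]/ckeys[] hold the DHHC-1 secrets shown above
    # (ckeys[4] is empty, so keyid 4 is sent without a controller key).
    digest=sha512
    for dhgroup in "${dhgroups[@]}"; do      # ffdhe3072..ffdhe8192 in this section
        for keyid in "${!keys[@]}"; do       # 0..4
            # Target side: install the key (and controller key, if any).
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            # Host side: only offer the combination under test ...
            rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            # ... then connect with the matching key pair; the controller key
            # argument is only added when a ckey exists for this keyid.
            ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
                -a "$(get_main_ns_ip)" -s 4420 \
                -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                --dhchap-key "key${keyid}" "${ckey[@]}"
            # A successful DH-HMAC-CHAP handshake exposes the namespace (the
            # bare "nvme0n1" lines in this log); verify, then tear down.
            [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
            rpc_cmd bdev_nvme_detach_controller nvme0
        done
    done

Note that keyid 4 exercises the unidirectional case: the host authenticates to the target, but no --dhchap-ctrlr-key is passed, which is why the key4 attach above carries only a single --dhchap-key argument.
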
00:56:24.591 nvme0n1 00:56:24.591 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:24.591 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:24.591 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:24.591 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg4Njg4YzI0NTNiNWRjNjk0MTM4ZTJhNjdlZDNlNTLy+Kbr: 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg4Njg4YzI0NTNiNWRjNjk0MTM4ZTJhNjdlZDNlNTLy+Kbr: 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: ]] 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:56:24.592 10:07:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:24.592 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:24.850 nvme0n1 00:56:24.850 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:24.850 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:24.850 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:24.850 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:24.850 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:25.109 10:07:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: ]] 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:25.109 10:07:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:25.109 10:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:25.369 nvme0n1 00:56:25.369 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:25.369 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:25.369 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:25.369 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:25.369 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:25.369 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:25.369 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:25.369 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:25.369 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:25.369 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:25.369 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:25.369 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:25.369 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:56:25.369 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:25.369 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:56:25.369 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:56:25.369 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:56:25.370 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGU2YzVlM2E3Zjg3OGQ1NDVlYzNkMmI4MGYwM2ZlMWI/RIEC: 00:56:25.370 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: 00:56:25.370 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:56:25.370 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:56:25.370 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGU2YzVlM2E3Zjg3OGQ1NDVlYzNkMmI4MGYwM2ZlMWI/RIEC: 00:56:25.370 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: ]] 00:56:25.370 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: 00:56:25.370 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:56:25.370 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:25.370 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:56:25.370 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:56:25.370 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:56:25.370 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:25.370 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:56:25.370 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:25.370 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:25.370 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:25.370 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:25.370 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:25.370 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:25.370 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:25.370 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:25.370 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:25.370 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:25.370 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:25.370 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:25.370 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:25.370 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:25.370 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:56:25.370 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:25.370 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:25.628 nvme0n1 00:56:25.628 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:25.628 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:25.628 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:25.628 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:25.628 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:25.628 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGZmYjMxMjYwMTFhMmRlYmUyZjU1NjAxZGZhMDczYjJmMzhlMGYxZTE3YzhiNDUwCPrKjQ==: 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGZmYjMxMjYwMTFhMmRlYmUyZjU1NjAxZGZhMDczYjJmMzhlMGYxZTE3YzhiNDUwCPrKjQ==: 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: ]] 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:25.887 10:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:26.145 nvme0n1 00:56:26.145 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:26.145 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:26.145 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:26.145 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:26.145 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:26.145 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:26.145 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:26.145 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:26.145 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:26.145 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:26.145 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:26.145 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:26.145 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:56:26.145 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:26.145 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:56:26.145 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:56:26.145 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:56:26.145 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDNiYjE3OWEwZTViZmNkMTA3Yjc5MjFlOTM1YTEyMjRjMDI1ZWVhZDYzNmM5MjM1NDFjMWQ0NjlkZDIxNWY4YlJjjsw=: 00:56:26.145 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:56:26.145 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:56:26.145 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:56:26.145 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDNiYjE3OWEwZTViZmNkMTA3Yjc5MjFlOTM1YTEyMjRjMDI1ZWVhZDYzNmM5MjM1NDFjMWQ0NjlkZDIxNWY4YlJjjsw=: 00:56:26.145 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:56:26.145 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:56:26.145 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:26.145 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:56:26.145 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:56:26.146 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:56:26.146 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:26.146 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:56:26.146 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:26.146 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:26.146 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:26.146 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:26.146 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:26.146 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:26.146 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:26.146 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:26.146 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:26.146 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:26.146 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:26.146 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:26.146 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:26.146 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:26.146 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:56:26.146 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:26.146 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:26.404 nvme0n1 00:56:26.404 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:26.404 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:26.404 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:26.404 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:26.404 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:26.404 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:26.663 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:26.663 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:26.663 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:26.663 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:26.663 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:26.663 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:56:26.663 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:26.663 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:56:26.663 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:26.663 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:56:26.663 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:56:26.663 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:56:26.663 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg4Njg4YzI0NTNiNWRjNjk0MTM4ZTJhNjdlZDNlNTLy+Kbr: 00:56:26.663 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: 00:56:26.663 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:56:26.663 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:56:26.663 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg4Njg4YzI0NTNiNWRjNjk0MTM4ZTJhNjdlZDNlNTLy+Kbr: 00:56:26.663 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: ]] 00:56:26.663 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQ0NzI2Mzc3N2NhMjQ4YzM3ODIwMmIwMGQzMTQ4ODRlZjM1MDUwODkwYTAwOGI3MDU0MDM2OTU2Zjg1MjZhNJ9OG9E=: 00:56:26.663 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:56:26.663 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:26.663 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:56:26.664 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:56:26.664 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:56:26.664 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:26.664 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:56:26.664 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:26.664 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:26.664 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:26.664 10:07:33 
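
For readers tracing the loop: each digest/DH-group pass boils down to the target-side nvmet_auth_set_key writes (the bare echo 'hmac(sha512)' / echo ffdhe8192 / echo DHHC-... lines; xtrace hides their configfs redirections) followed by two host-side RPCs. A minimal sketch of the equivalent rpc.py calls for this sha512/ffdhe8192/keyid-0 pass; the ./scripts/rpc.py path and pre-loaded key names key0/ckey0 are assumptions, not taken from this log:

    # Restrict the host to the digest/DH-group pair under test, mirroring
    # the bdev_nvme_set_options call traced above.
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

    # Attach over TCP with the matching DH-HMAC-CHAP key pair; key0/ckey0
    # must already be registered with the SPDK keyring.
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
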
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:26.664 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:26.664 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:26.664 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:26.664 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:26.664 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:26.664 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:26.664 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:26.664 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:26.664 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:26.664 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:26.664 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:56:26.664 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:26.664 10:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:27.231 nvme0n1 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: ]] 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:27.231 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:27.232 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:27.232 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:27.232 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:27.232 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:27.232 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:27.232 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:27.232 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:27.232 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:27.232 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:56:27.232 10:07:34 
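
A note on the DHHC-1:NN:...: strings echoed above: this is the standard NVMe DH-HMAC-CHAP secret representation shared by nvme-cli, the kernel nvmet target, and SPDK. The middle field records how the secret was transformed (00 = cleartext secret, 01/02/03 = hashed to SHA-256/384/512 length) and the base64 payload carries the secret plus a CRC-32 check value. A hedged sketch of minting such a key with nvme-cli; the option names should be verified against `nvme gen-dhchap-key --help` on your install:

    # Generate a 32-byte, SHA-256-transformed DH-HMAC-CHAP secret for a host NQN.
    nvme gen-dhchap-key --key-length 32 --hmac 1 --nqn nqn.2024-02.io.spdk:host0
    # Prints something of the form: DHHC-1:01:<base64 secret + CRC-32>:
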
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:27.232 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:27.799 nvme0n1 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGU2YzVlM2E3Zjg3OGQ1NDVlYzNkMmI4MGYwM2ZlMWI/RIEC: 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGU2YzVlM2E3Zjg3OGQ1NDVlYzNkMmI4MGYwM2ZlMWI/RIEC: 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: ]] 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:27.799 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:27.800 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:27.800 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:27.800 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:56:27.800 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:27.800 10:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:28.412 nvme0n1 00:56:28.412 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGZmYjMxMjYwMTFhMmRlYmUyZjU1NjAxZGZhMDczYjJmMzhlMGYxZTE3YzhiNDUwCPrKjQ==: 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGZmYjMxMjYwMTFhMmRlYmUyZjU1NjAxZGZhMDczYjJmMzhlMGYxZTE3YzhiNDUwCPrKjQ==: 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: ]] 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDZkNGRhNjExY2Y2OGFlZGY2MWI0N2QwYWZiMmRjYjebYf4B: 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:28.413 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:28.981 nvme0n1 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDNiYjE3OWEwZTViZmNkMTA3Yjc5MjFlOTM1YTEyMjRjMDI1ZWVhZDYzNmM5MjM1NDFjMWQ0NjlkZDIxNWY4YlJjjsw=: 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDNiYjE3OWEwZTViZmNkMTA3Yjc5MjFlOTM1YTEyMjRjMDI1ZWVhZDYzNmM5MjM1NDFjMWQ0NjlkZDIxNWY4YlJjjsw=: 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:56:28.981 10:07:35 
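
The get_main_ns_ip block that repeats before every attach is a transport-keyed variable lookup. A condensed, self-contained bash sketch of the same logic (the real helper lives in nvmf/common.sh; the values here are illustrative):

    # Map transport -> name of the variable holding the usable address,
    # then dereference that name; fallback branches are elided.
    declare -A ip_candidates=( [rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP )
    TEST_TRANSPORT=tcp
    NVMF_INITIATOR_IP=10.0.0.1
    var=${ip_candidates[$TEST_TRANSPORT]}   # -> NVMF_INITIATOR_IP
    ip=${!var}                              # indirect expansion -> 10.0.0.1
    echo "$ip"
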
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:56:28.981 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:28.982 10:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:29.550 nvme0n1 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: ]] 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:29.550 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:29.550 2024/12/09 10:07:36 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:56:29.550 request: 00:56:29.550 { 00:56:29.550 "method": "bdev_nvme_attach_controller", 00:56:29.550 "params": { 00:56:29.550 "name": "nvme0", 00:56:29.550 "trtype": "tcp", 00:56:29.550 "traddr": "10.0.0.1", 00:56:29.550 "adrfam": "ipv4", 00:56:29.550 "trsvcid": "4420", 00:56:29.550 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:56:29.550 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:56:29.550 "prchk_reftag": false, 00:56:29.551 "prchk_guard": false, 00:56:29.551 "hdgst": false, 00:56:29.551 "ddgst": false, 00:56:29.551 "allow_unrecognized_csi": false 00:56:29.551 } 00:56:29.551 } 00:56:29.551 Got JSON-RPC error response 00:56:29.551 GoRPCClient: error on JSON-RPC call 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # 
get_main_ns_ip 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:29.551 2024/12/09 10:07:36 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:56:29.551 request: 00:56:29.551 { 00:56:29.551 "method": "bdev_nvme_attach_controller", 00:56:29.551 "params": { 00:56:29.551 "name": "nvme0", 00:56:29.551 "trtype": "tcp", 00:56:29.551 "traddr": "10.0.0.1", 00:56:29.551 "adrfam": "ipv4", 00:56:29.551 "trsvcid": "4420", 00:56:29.551 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:56:29.551 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:56:29.551 "prchk_reftag": false, 00:56:29.551 "prchk_guard": false, 
00:56:29.551 "hdgst": false, 00:56:29.551 "ddgst": false, 00:56:29.551 "dhchap_key": "key2", 00:56:29.551 "allow_unrecognized_csi": false 00:56:29.551 } 00:56:29.551 } 00:56:29.551 Got JSON-RPC error response 00:56:29.551 GoRPCClient: error on JSON-RPC call 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:29.551 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:29.811 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:56:29.811 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:56:29.811 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:29.811 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:29.811 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:29.811 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:29.811 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:29.811 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:29.811 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:29.811 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:29.811 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:29.811 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:29.811 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:56:29.811 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:56:29.811 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:56:29.811 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:56:29.811 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:56:29.811 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t 
rpc_cmd 00:56:29.811 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:56:29.811 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:56:29.811 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:29.811 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:29.811 2024/12/09 10:07:36 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:56:29.811 request: 00:56:29.811 { 00:56:29.811 "method": "bdev_nvme_attach_controller", 00:56:29.811 "params": { 00:56:29.811 "name": "nvme0", 00:56:29.811 "trtype": "tcp", 00:56:29.811 "traddr": "10.0.0.1", 00:56:29.811 "adrfam": "ipv4", 00:56:29.811 "trsvcid": "4420", 00:56:29.811 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:56:29.811 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:56:29.811 "prchk_reftag": false, 00:56:29.811 "prchk_guard": false, 00:56:29.811 "hdgst": false, 00:56:29.811 "ddgst": false, 00:56:29.811 "dhchap_key": "key1", 00:56:29.811 "dhchap_ctrlr_key": "ckey2", 00:56:29.811 "allow_unrecognized_csi": false 00:56:29.811 } 00:56:29.811 } 00:56:29.811 Got JSON-RPC error response 00:56:29.811 GoRPCClient: error on JSON-RPC call 00:56:29.811 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:56:29.811 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:56:29.811 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:56:29.811 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:56:29.811 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:56:29.811 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:29.812 nvme0n1 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGU2YzVlM2E3Zjg3OGQ1NDVlYzNkMmI4MGYwM2ZlMWI/RIEC: 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGU2YzVlM2E3Zjg3OGQ1NDVlYzNkMmI4MGYwM2ZlMWI/RIEC: 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: ]] 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:29.812 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:30.071 2024/12/09 10:07:36 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey2 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:56:30.072 request: 00:56:30.072 { 00:56:30.072 "method": "bdev_nvme_set_keys", 00:56:30.072 "params": { 00:56:30.072 "name": "nvme0", 00:56:30.072 "dhchap_key": "key1", 00:56:30.072 "dhchap_ctrlr_key": "ckey2" 00:56:30.072 } 00:56:30.072 } 00:56:30.072 Got JSON-RPC error response 00:56:30.072 GoRPCClient: error on JSON-RPC call 00:56:30.072 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:56:30.072 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:56:30.072 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:56:30.072 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:56:30.072 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:56:30.072 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:56:30.072 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:56:30.072 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:30.072 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:30.072 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:30.072 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:56:30.072 10:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:56:31.008 10:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:56:31.008 10:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:56:31.008 10:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:31.008 10:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:31.008 10:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:31.008 10:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:56:31.008 10:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:56:31.008 10:07:37 
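
The NOT wrapper around the failing rpc_cmd calls above asserts an expected failure: the unauthenticated attach must die with Code=-5 Input/output error, and a mismatched bdev_nvme_set_keys with Code=-13 Permission denied, after which the harness checks es=1 instead of tripping over set -e. A minimal stand-in for the real helper in autotest_common.sh (the real one is richer):

    # Invert the exit status: succeed only when the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1    # unexpected success
        fi
        return 0        # the failure we were waiting for
    }
    NOT ./scripts/rpc.py bdev_nvme_set_keys nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey2
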
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:31.008 10:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:56:31.008 10:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:56:31.008 10:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:56:31.008 10:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:31.008 10:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:31.008 10:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:56:31.008 10:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:56:31.008 10:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIzNjM3M2MwOTkxYTgyNDU0YjNjMGIzNjc3Mzc3ZDY5YjkwZTkzNzhhNTdhMTZmHezTSQ==: 00:56:31.008 10:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: ]] 00:56:31.008 10:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODlhMWY0ODRlMWEyMWI5NTBhMTlkZGNkNjVjZjUwNjI5NDQwMzMyOTYyZjkyOTE56/SZ6Q==: 00:56:31.008 10:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:56:31.008 10:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:56:31.008 10:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:56:31.008 10:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:56:31.008 10:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:31.009 10:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:31.009 10:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:56:31.009 10:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:31.009 10:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:56:31.009 10:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:56:31.009 10:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:56:31.009 10:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:56:31.009 10:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:31.009 10:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:31.267 nvme0n1 00:56:31.267 10:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:31.267 10:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:56:31.267 10:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:56:31.267 10:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
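
Note the extra flags on the attach traced at host/auth.sh@142: --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 keep reconnect attempts tight, so the key-rotation checks that follow resolve within a second or two instead of hanging on the default timeouts. The same call written out (rpc.py path assumed, flags verbatim from the trace):

    # Aggressive reconnect tuning: declare the controller lost after 1 s,
    # retrying once per second, so a failed DH-HMAC-CHAP re-handshake
    # surfaces quickly in the test.
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 \
        --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
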
00:56:31.267 10:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:56:31.267 10:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:56:31.267 10:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGU2YzVlM2E3Zjg3OGQ1NDVlYzNkMmI4MGYwM2ZlMWI/RIEC: 00:56:31.267 10:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: 00:56:31.267 10:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:56:31.267 10:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:56:31.267 10:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGU2YzVlM2E3Zjg3OGQ1NDVlYzNkMmI4MGYwM2ZlMWI/RIEC: 00:56:31.267 10:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: ]] 00:56:31.267 10:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjU1YWQ2NjQ0MGJmZmVjZDIwYjYyNGVlY2RlYzI1ODWVHhZp: 00:56:31.267 10:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:56:31.267 10:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:56:31.267 10:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:56:31.268 10:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:56:31.268 10:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:56:31.268 10:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:56:31.268 10:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:56:31.268 10:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:56:31.268 10:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:31.268 10:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:31.268 2024/12/09 10:07:38 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey1 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:56:31.268 request: 00:56:31.268 { 00:56:31.268 "method": "bdev_nvme_set_keys", 00:56:31.268 "params": { 00:56:31.268 "name": "nvme0", 00:56:31.268 "dhchap_key": "key2", 00:56:31.268 "dhchap_ctrlr_key": "ckey1" 00:56:31.268 } 00:56:31.268 } 00:56:31.268 Got JSON-RPC error response 00:56:31.268 GoRPCClient: error on JSON-RPC call 00:56:31.268 10:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:56:31.268 10:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:56:31.268 10:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:56:31.268 10:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:56:31.268 10:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:56:31.268 10:07:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:56:31.268 10:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:56:31.268 10:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:31.268 10:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:31.268 10:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:31.268 10:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:56:31.268 10:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:56:32.206 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:56:32.206 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:56:32.206 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:32.206 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:32.206 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:32.206 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:56:32.206 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:56:32.206 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:56:32.206 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:56:32.206 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:56:32.206 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:56:32.206 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:56:32.206 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:56:32.206 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:56:32.206 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:56:32.206 rmmod nvme_tcp 00:56:32.206 rmmod nvme_fabrics 00:56:32.466 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:56:32.466 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:56:32.466 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:56:32.466 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 93140 ']' 00:56:32.466 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 93140 00:56:32.466 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 93140 ']' 00:56:32.466 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 93140 00:56:32.466 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:56:32.466 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:56:32.466 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93140 00:56:32.466 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:56:32.466 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:56:32.466 10:07:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93140' 00:56:32.466 killing process with pid 93140 00:56:32.466 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 93140 00:56:32.466 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 93140 00:56:32.466 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:56:32.466 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:56:32.466 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:56:32.466 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:56:32.466 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:56:32.466 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:56:32.466 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:56:32.466 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:56:32.466 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:56:32.466 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:56:32.466 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:56:32.466 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:56:32.726 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:56:32.726 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:56:32.726 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:56:32.726 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:56:32.726 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:56:32.726 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:56:32.726 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:56:32.726 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:56:32.726 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:56:32.726 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:56:32.726 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:56:32.726 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:56:32.726 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:56:32.726 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:56:32.726 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:56:32.726 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:56:32.726 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:56:32.726 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:56:32.726 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:56:32.726 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:56:32.726 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:56:32.726 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:56:32.726 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:56:32.726 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:56:32.726 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:56:32.726 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:56:32.986 10:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:56:33.555 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:56:33.814 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:56:33.814 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:56:33.814 10:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.DDC /tmp/spdk.key-null.SBl /tmp/spdk.key-sha256.DEf /tmp/spdk.key-sha384.IXq /tmp/spdk.key-sha512.HCU /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:56:33.814 10:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:56:34.382 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:56:34.382 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:56:34.382 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:56:34.382 00:56:34.382 real 0m35.870s 00:56:34.382 user 0m33.260s 00:56:34.382 sys 0m5.057s 00:56:34.382 10:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:56:34.382 ************************************ 00:56:34.382 END TEST nvmf_auth_host 00:56:34.382 ************************************ 00:56:34.382 10:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:56:34.642 ************************************ 00:56:34.642 START TEST nvmf_digest 00:56:34.642 
************************************ 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:56:34.642 * Looking for test storage... 00:56:34.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:56:34.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:34.642 --rc genhtml_branch_coverage=1 00:56:34.642 --rc genhtml_function_coverage=1 00:56:34.642 --rc genhtml_legend=1 00:56:34.642 --rc geninfo_all_blocks=1 00:56:34.642 --rc geninfo_unexecuted_blocks=1 00:56:34.642 00:56:34.642 ' 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:56:34.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:34.642 --rc genhtml_branch_coverage=1 00:56:34.642 --rc genhtml_function_coverage=1 00:56:34.642 --rc genhtml_legend=1 00:56:34.642 --rc geninfo_all_blocks=1 00:56:34.642 --rc geninfo_unexecuted_blocks=1 00:56:34.642 00:56:34.642 ' 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:56:34.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:34.642 --rc genhtml_branch_coverage=1 00:56:34.642 --rc genhtml_function_coverage=1 00:56:34.642 --rc genhtml_legend=1 00:56:34.642 --rc geninfo_all_blocks=1 00:56:34.642 --rc geninfo_unexecuted_blocks=1 00:56:34.642 00:56:34.642 ' 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:56:34.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:34.642 --rc genhtml_branch_coverage=1 00:56:34.642 --rc genhtml_function_coverage=1 00:56:34.642 --rc genhtml_legend=1 00:56:34.642 --rc geninfo_all_blocks=1 00:56:34.642 --rc geninfo_unexecuted_blocks=1 00:56:34.642 00:56:34.642 ' 00:56:34.642 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:56:34.903 10:07:41 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:56:34.903 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:56:34.903 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:56:34.904 Cannot find device "nvmf_init_br" 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:56:34.904 Cannot find device "nvmf_init_br2" 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:56:34.904 Cannot find device "nvmf_tgt_br" 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:56:34.904 Cannot find device "nvmf_tgt_br2" 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:56:34.904 Cannot find device "nvmf_init_br" 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:56:34.904 Cannot find device "nvmf_init_br2" 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:56:34.904 Cannot find device "nvmf_tgt_br" 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:56:34.904 Cannot find device "nvmf_tgt_br2" 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:56:34.904 Cannot find device "nvmf_br" 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:56:34.904 Cannot find device "nvmf_init_if" 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:56:34.904 Cannot find device "nvmf_init_if2" 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:56:34.904 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:56:34.904 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:56:34.904 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:56:35.164 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:56:35.164 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:56:35.164 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:56:35.164 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:56:35.164 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:56:35.164 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:56:35.164 10:07:42 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:56:35.164 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:56:35.164 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms 00:56:35.164 00:56:35.164 --- 10.0.0.3 ping statistics --- 00:56:35.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:35.164 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:56:35.164 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:56:35.164 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:56:35.164 00:56:35.164 --- 10.0.0.4 ping statistics --- 00:56:35.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:35.164 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:56:35.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:56:35.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:56:35.164 00:56:35.164 --- 10.0.0.1 ping statistics --- 00:56:35.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:35.164 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:56:35.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:56:35.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:56:35.164 00:56:35.164 --- 10.0.0.2 ping statistics --- 00:56:35.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:35.164 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:56:35.164 ************************************ 00:56:35.164 START TEST nvmf_digest_clean 00:56:35.164 ************************************ 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
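Those four pings are the sign-off on nvmf_veth_init: 10.0.0.1 and 10.0.0.2 sit on the host-side initiator interfaces, 10.0.0.3 and 10.0.0.4 sit on the target interfaces inside the nvmf_tgt_ns_spdk namespace, and all four legs are enslaved to the nvmf_br bridge with iptables ACCEPT rules for port 4420. If the topology ever needs re-checking by hand, a short sketch (interface and namespace names taken from the trace above):

ip addr show nvmf_init_if                                  # expect 10.0.0.1/24
ip addr show nvmf_init_if2                                 # expect 10.0.0.2/24
ip netns exec nvmf_tgt_ns_spdk ip addr show nvmf_tgt_if    # expect 10.0.0.3/24
ip netns exec nvmf_tgt_ns_spdk ip addr show nvmf_tgt_if2   # expect 10.0.0.4/24
ip link show master nvmf_br                                # lists the four *_br bridge ports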
00:56:35.164 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:56:35.165 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:56:35.165 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:56:35.165 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:56:35.165 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:56:35.165 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:56:35.165 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:56:35.165 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=94809 00:56:35.165 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:56:35.165 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 94809 00:56:35.165 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94809 ']' 00:56:35.165 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:56:35.165 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:56:35.165 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:56:35.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:56:35.165 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:56:35.165 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:56:35.165 [2024-12-09 10:07:42.194496] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:56:35.165 [2024-12-09 10:07:42.195016] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:56:35.424 [2024-12-09 10:07:42.344327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:56:35.424 [2024-12-09 10:07:42.394491] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:56:35.424 [2024-12-09 10:07:42.394593] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:56:35.424 [2024-12-09 10:07:42.394629] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:56:35.424 [2024-12-09 10:07:42.394655] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:56:35.424 [2024-12-09 10:07:42.394671] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:56:35.424 [2024-12-09 10:07:42.394962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:56:36.363 10:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:56:36.363 10:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:56:36.363 10:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:56:36.363 10:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:56:36.363 10:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:56:36.363 10:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:56:36.363 10:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:56:36.363 10:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:56:36.363 10:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:56:36.363 10:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:36.363 10:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:56:36.363 null0 00:56:36.363 [2024-12-09 10:07:43.277258] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:56:36.363 [2024-12-09 10:07:43.301323] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:56:36.363 10:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:36.363 10:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:56:36.363 10:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:56:36.363 10:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:56:36.363 10:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:56:36.363 10:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:56:36.363 10:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:56:36.363 10:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:56:36.363 10:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94859 00:56:36.363 10:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:56:36.363 10:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94859 /var/tmp/bperf.sock 00:56:36.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
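At this point the target side is fully up: nvmf_tgt (pid 94809) is running inside the namespace with --wait-for-rpc, common_target_config has created the null0 bdev, brought up the TCP transport, and opened a listener on 10.0.0.3:4420, and run_bperf has just launched bdevperf paused on /var/tmp/bperf.sock. The RPC set behind common_target_config is collapsed to a single rpc_cmd in this trace; a hedged reconstruction of its usual shape (subsystem NQN and serial come from this log, the null-bdev sizing is invented for illustration):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o                 # matches the "TCP Transport Init" notice
$rpc bdev_null_create null0 100 4096                 # 100 MiB, 4 KiB blocks: hypothetical sizing
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME -a
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420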
00:56:36.363 10:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94859 ']' 00:56:36.363 10:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:56:36.363 10:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:56:36.363 10:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:56:36.363 10:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:56:36.363 10:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:56:36.363 [2024-12-09 10:07:43.361567] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:56:36.363 [2024-12-09 10:07:43.361698] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94859 ] 00:56:36.623 [2024-12-09 10:07:43.510610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:56:36.623 [2024-12-09 10:07:43.566423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:56:37.560 10:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:56:37.560 10:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:56:37.560 10:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:56:37.560 10:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:56:37.560 10:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:56:37.560 10:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:56:37.560 10:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:56:37.819 nvme0n1 00:56:38.188 10:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:56:38.188 10:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:56:38.188 Running I/O for 2 seconds... 
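The three bperf_rpc calls above are the entire client-side recipe for this pass: unpause bdevperf's framework, attach an NVMe-oF controller over TCP with --ddgst so that every data PDU carries a crc32c data digest (that digest work is what the accel stats get checked for afterwards), then drive the 2-second workload through bdevperf.py. Condensed from the trace into a replayable form:

sock=/var/tmp/bperf.sock
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s $sock framework_start_init
$rpc -s $sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests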
00:56:40.073 23946.00 IOPS, 93.54 MiB/s [2024-12-09T10:07:47.117Z] 24126.00 IOPS, 94.24 MiB/s 00:56:40.073 Latency(us) 00:56:40.073 [2024-12-09T10:07:47.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:56:40.073 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:56:40.073 nvme0n1 : 2.00 24153.90 94.35 0.00 0.00 5294.25 2718.74 8642.74 00:56:40.073 [2024-12-09T10:07:47.117Z] =================================================================================================================== 00:56:40.073 [2024-12-09T10:07:47.117Z] Total : 24153.90 94.35 0.00 0.00 5294.25 2718.74 8642.74 00:56:40.073 { 00:56:40.073 "results": [ 00:56:40.073 { 00:56:40.073 "job": "nvme0n1", 00:56:40.073 "core_mask": "0x2", 00:56:40.073 "workload": "randread", 00:56:40.073 "status": "finished", 00:56:40.073 "queue_depth": 128, 00:56:40.073 "io_size": 4096, 00:56:40.073 "runtime": 2.002989, 00:56:40.073 "iops": 24153.901993470758, 00:56:40.073 "mibps": 94.35117966199515, 00:56:40.073 "io_failed": 0, 00:56:40.073 "io_timeout": 0, 00:56:40.073 "avg_latency_us": 5294.25074954283, 00:56:40.073 "min_latency_us": 2718.7423580786026, 00:56:40.073 "max_latency_us": 8642.738864628822 00:56:40.073 } 00:56:40.073 ], 00:56:40.073 "core_count": 1 00:56:40.073 } 00:56:40.073 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:56:40.073 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:56:40.073 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:56:40.073 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:56:40.073 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:56:40.073 | select(.opcode=="crc32c") 00:56:40.073 | "\(.module_name) \(.executed)"' 00:56:40.333 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:56:40.333 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:56:40.333 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:56:40.333 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:56:40.333 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94859 00:56:40.333 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94859 ']' 00:56:40.333 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94859 00:56:40.333 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:56:40.333 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:56:40.333 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94859 00:56:40.333 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:56:40.333 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
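The summary rows and the JSON block agree, and the MiB/s column is nothing more than IOPS scaled by the 4096-byte I/O size; the jq filter above then reads accel_get_stats, and the [[ software == software ]] comparison confirms the crc32c digests were executed by the software accel module (expected, since scan_dsa=false and no DSA engine was loaded). Arithmetic check of the reported throughput:

awk 'BEGIN { iops = 24153.90; io = 4096
             printf "%.2f MiB/s\n", iops * io / 1048576 }'   # 94.35, matching the mibps field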
00:56:40.333 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94859' 00:56:40.333 killing process with pid 94859 00:56:40.333 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94859 00:56:40.333 Received shutdown signal, test time was about 2.000000 seconds 00:56:40.333 00:56:40.333 Latency(us) 00:56:40.333 [2024-12-09T10:07:47.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:56:40.333 [2024-12-09T10:07:47.377Z] =================================================================================================================== 00:56:40.333 [2024-12-09T10:07:47.377Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:56:40.333 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94859 00:56:40.593 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:56:40.593 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:56:40.593 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:56:40.593 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:56:40.593 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:56:40.593 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:56:40.593 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:56:40.593 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94945 00:56:40.593 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:56:40.593 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94945 /var/tmp/bperf.sock 00:56:40.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:56:40.593 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94945 ']' 00:56:40.593 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:56:40.593 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:56:40.593 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:56:40.593 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:56:40.593 10:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:56:40.593 [2024-12-09 10:07:47.490585] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:56:40.593 [2024-12-09 10:07:47.490736] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:56:40.593 Zero copy mechanism will not be used. 
00:56:40.593 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94945 ] 00:56:40.593 [2024-12-09 10:07:47.627344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:56:40.852 [2024-12-09 10:07:47.679741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:56:41.420 10:07:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:56:41.420 10:07:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:56:41.420 10:07:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:56:41.420 10:07:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:56:41.420 10:07:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:56:41.679 10:07:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:56:41.679 10:07:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:56:41.939 nvme0n1 00:56:41.939 10:07:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:56:41.939 10:07:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:56:42.198 I/O size of 131072 is greater than zero copy threshold (65536). 00:56:42.198 Zero copy mechanism will not be used. 00:56:42.198 Running I/O for 2 seconds... 
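Each digest_clean pass in this trace drives bdevperf entirely over its RPC socket: finish framework init, attach an NVMe-oF controller with TCP data digest enabled, run the workload, then read the accel stats back to see which module actually executed crc32c. A minimal sketch of that sequence, reusing the exact commands traced above and assuming bdevperf was started with --wait-for-rpc on /var/tmp/bperf.sock as in this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  # complete subsystem initialization (bdevperf is parked on --wait-for-rpc)
  $rpc -s $sock framework_start_init
  # attach the target with data digest (--ddgst) turned on
  $rpc -s $sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # run the configured workload; prints the result JSON seen below
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests
  # report which accel module computed crc32c and how many times
  $rpc -s $sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'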
00:56:44.075 9621.00 IOPS, 1202.62 MiB/s [2024-12-09T10:07:51.119Z] 9458.50 IOPS, 1182.31 MiB/s 00:56:44.075 Latency(us) 00:56:44.075 [2024-12-09T10:07:51.119Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:56:44.075 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:56:44.075 nvme0n1 : 2.00 9457.35 1182.17 0.00 0.00 1689.14 493.67 8242.08 00:56:44.075 [2024-12-09T10:07:51.119Z] =================================================================================================================== 00:56:44.075 [2024-12-09T10:07:51.119Z] Total : 9457.35 1182.17 0.00 0.00 1689.14 493.67 8242.08 00:56:44.075 { 00:56:44.075 "results": [ 00:56:44.075 { 00:56:44.075 "job": "nvme0n1", 00:56:44.075 "core_mask": "0x2", 00:56:44.075 "workload": "randread", 00:56:44.075 "status": "finished", 00:56:44.075 "queue_depth": 16, 00:56:44.075 "io_size": 131072, 00:56:44.075 "runtime": 2.001935, 00:56:44.075 "iops": 9457.35001386159, 00:56:44.075 "mibps": 1182.1687517326986, 00:56:44.075 "io_failed": 0, 00:56:44.075 "io_timeout": 0, 00:56:44.075 "avg_latency_us": 1689.1448200814777, 00:56:44.075 "min_latency_us": 493.6663755458515, 00:56:44.075 "max_latency_us": 8242.08209606987 00:56:44.075 } 00:56:44.075 ], 00:56:44.075 "core_count": 1 00:56:44.075 } 00:56:44.075 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:56:44.075 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:56:44.075 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:56:44.075 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:56:44.075 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:56:44.075 | select(.opcode=="crc32c") 00:56:44.075 | "\(.module_name) \(.executed)"' 00:56:44.334 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:56:44.334 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:56:44.334 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:56:44.334 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:56:44.334 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94945 00:56:44.334 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94945 ']' 00:56:44.334 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94945 00:56:44.334 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:56:44.334 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:56:44.334 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94945 00:56:44.334 killing process with pid 94945 00:56:44.334 Received shutdown signal, test time was about 2.000000 seconds 00:56:44.334 00:56:44.334 Latency(us) 00:56:44.334 [2024-12-09T10:07:51.378Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:56:44.334 [2024-12-09T10:07:51.378Z] =================================================================================================================== 00:56:44.334 [2024-12-09T10:07:51.379Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:56:44.335 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:56:44.335 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:56:44.335 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94945' 00:56:44.335 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94945 00:56:44.335 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94945 00:56:44.594 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:56:44.594 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:56:44.594 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:56:44.594 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:56:44.594 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:56:44.594 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:56:44.594 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:56:44.594 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:56:44.594 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95035 00:56:44.594 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95035 /var/tmp/bperf.sock 00:56:44.594 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 95035 ']' 00:56:44.594 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:56:44.594 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:56:44.594 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:56:44.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:56:44.594 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:56:44.594 10:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:56:44.594 [2024-12-09 10:07:51.552615] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:56:44.594 [2024-12-09 10:07:51.552770] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95035 ] 00:56:44.854 [2024-12-09 10:07:51.680280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:56:44.854 [2024-12-09 10:07:51.731373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:56:45.421 10:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:56:45.421 10:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:56:45.421 10:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:56:45.421 10:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:56:45.421 10:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:56:45.988 10:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:56:45.988 10:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:56:46.247 nvme0n1 00:56:46.247 10:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:56:46.247 10:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:56:46.247 Running I/O for 2 seconds... 
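The mibps figure in each result block is derived directly from iops and io_size (IOPS x bytes per I/O / 2^20). Checking it against the 131072-byte randread JSON above:

  awk 'BEGIN { printf "%.4f\n", 9457.35001386159 * 131072 / 1048576 }'
  # prints 1182.1688, matching the reported "mibps": 1182.1687517326986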
00:56:48.115 29107.00 IOPS, 113.70 MiB/s [2024-12-09T10:07:55.159Z] 28770.00 IOPS, 112.38 MiB/s 00:56:48.115 Latency(us) 00:56:48.115 [2024-12-09T10:07:55.159Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:56:48.115 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:56:48.115 nvme0n1 : 2.00 28768.05 112.38 0.00 0.00 4443.29 2260.85 8184.85 00:56:48.115 [2024-12-09T10:07:55.159Z] =================================================================================================================== 00:56:48.115 [2024-12-09T10:07:55.159Z] Total : 28768.05 112.38 0.00 0.00 4443.29 2260.85 8184.85 00:56:48.115 { 00:56:48.116 "results": [ 00:56:48.116 { 00:56:48.116 "job": "nvme0n1", 00:56:48.116 "core_mask": "0x2", 00:56:48.116 "workload": "randwrite", 00:56:48.116 "status": "finished", 00:56:48.116 "queue_depth": 128, 00:56:48.116 "io_size": 4096, 00:56:48.116 "runtime": 2.004585, 00:56:48.116 "iops": 28768.049247101022, 00:56:48.116 "mibps": 112.37519237148837, 00:56:48.116 "io_failed": 0, 00:56:48.116 "io_timeout": 0, 00:56:48.116 "avg_latency_us": 4443.28719594438, 00:56:48.116 "min_latency_us": 2260.848908296943, 00:56:48.116 "max_latency_us": 8184.845414847162 00:56:48.116 } 00:56:48.116 ], 00:56:48.116 "core_count": 1 00:56:48.116 } 00:56:48.373 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:56:48.373 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:56:48.373 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:56:48.373 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:56:48.373 | select(.opcode=="crc32c") 00:56:48.373 | "\(.module_name) \(.executed)"' 00:56:48.373 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:56:48.373 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:56:48.373 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:56:48.373 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:56:48.373 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:56:48.373 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95035 00:56:48.373 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 95035 ']' 00:56:48.373 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 95035 00:56:48.373 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:56:48.631 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:56:48.631 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95035 00:56:48.631 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:56:48.631 killing process with pid 95035 00:56:48.631 Received shutdown signal, test time was about 2.000000 seconds 00:56:48.631 
00:56:48.631 Latency(us) 00:56:48.631 [2024-12-09T10:07:55.675Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:56:48.631 [2024-12-09T10:07:55.675Z] =================================================================================================================== 00:56:48.631 [2024-12-09T10:07:55.675Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:56:48.631 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:56:48.631 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95035' 00:56:48.631 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 95035 00:56:48.631 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 95035 00:56:48.631 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:56:48.631 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:56:48.631 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:56:48.631 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:56:48.631 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:56:48.631 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:56:48.631 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:56:48.631 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95121 00:56:48.631 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:56:48.631 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95121 /var/tmp/bperf.sock 00:56:48.631 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 95121 ']' 00:56:48.631 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:56:48.631 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:56:48.631 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:56:48.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:56:48.631 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:56:48.631 10:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:56:48.890 I/O size of 131072 is greater than zero copy threshold (65536). 00:56:48.890 Zero copy mechanism will not be used. 00:56:48.890 [2024-12-09 10:07:55.676999] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
00:56:48.890 [2024-12-09 10:07:55.677072] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95121 ] 00:56:48.890 [2024-12-09 10:07:55.806183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:56:48.890 [2024-12-09 10:07:55.856504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:56:49.836 10:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:56:49.836 10:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:56:49.836 10:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:56:49.836 10:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:56:49.836 10:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:56:50.120 10:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:56:50.120 10:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:56:50.120 nvme0n1 00:56:50.120 10:07:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:56:50.120 10:07:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:56:50.395 I/O size of 131072 is greater than zero copy threshold (65536). 00:56:50.395 Zero copy mechanism will not be used. 00:56:50.395 Running I/O for 2 seconds... 
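If the JSON that perform_tests emits is captured to a file, the per-job fields are easy to pull out with jq; results.json below is a hypothetical capture, not something this script writes:

  jq -r '.results[] | [.job, .workload, .iops, .avg_latency_us] | @tsv' results.json
  # e.g. for the randwrite 4096 run above:
  # nvme0n1   randwrite   28768.049247101022   4443.28719594438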
00:56:52.268 7296.00 IOPS, 912.00 MiB/s [2024-12-09T10:07:59.312Z] 7752.00 IOPS, 969.00 MiB/s 00:56:52.268 Latency(us) 00:56:52.268 [2024-12-09T10:07:59.312Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:56:52.268 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:56:52.268 nvme0n1 : 2.00 7747.67 968.46 0.00 0.00 2061.68 1631.25 7240.44 00:56:52.268 [2024-12-09T10:07:59.312Z] =================================================================================================================== 00:56:52.268 [2024-12-09T10:07:59.312Z] Total : 7747.67 968.46 0.00 0.00 2061.68 1631.25 7240.44 00:56:52.268 { 00:56:52.268 "results": [ 00:56:52.268 { 00:56:52.268 "job": "nvme0n1", 00:56:52.268 "core_mask": "0x2", 00:56:52.268 "workload": "randwrite", 00:56:52.268 "status": "finished", 00:56:52.268 "queue_depth": 16, 00:56:52.268 "io_size": 131072, 00:56:52.268 "runtime": 2.003054, 00:56:52.268 "iops": 7747.66930896521, 00:56:52.268 "mibps": 968.4586636206512, 00:56:52.268 "io_failed": 0, 00:56:52.268 "io_timeout": 0, 00:56:52.268 "avg_latency_us": 2061.6753816634405, 00:56:52.268 "min_latency_us": 1631.2454148471616, 00:56:52.268 "max_latency_us": 7240.440174672489 00:56:52.268 } 00:56:52.268 ], 00:56:52.268 "core_count": 1 00:56:52.268 } 00:56:52.268 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:56:52.268 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:56:52.268 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:56:52.268 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:56:52.268 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:56:52.268 | select(.opcode=="crc32c") 00:56:52.268 | "\(.module_name) \(.executed)"' 00:56:52.528 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:56:52.528 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:56:52.528 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:56:52.528 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:56:52.528 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95121 00:56:52.528 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 95121 ']' 00:56:52.528 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 95121 00:56:52.528 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:56:52.528 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:56:52.528 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95121 00:56:52.528 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:56:52.528 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:56:52.528 killing process with pid 95121 00:56:52.528 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95121' 00:56:52.528 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 95121 00:56:52.528 Received shutdown signal, test time was about 2.000000 seconds 00:56:52.528 00:56:52.528 Latency(us) 00:56:52.528 [2024-12-09T10:07:59.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:56:52.528 [2024-12-09T10:07:59.572Z] =================================================================================================================== 00:56:52.528 [2024-12-09T10:07:59.572Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:56:52.528 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 95121 00:56:52.788 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 94809 00:56:52.788 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94809 ']' 00:56:52.788 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94809 00:56:52.788 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:56:52.788 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:56:52.788 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94809 00:56:52.788 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:56:52.788 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:56:52.788 killing process with pid 94809 00:56:52.788 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94809' 00:56:52.788 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94809 00:56:52.788 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94809 00:56:53.048 00:56:53.048 real 0m17.787s 00:56:53.048 user 0m34.050s 00:56:53.048 sys 0m4.353s 00:56:53.048 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:56:53.048 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:56:53.048 ************************************ 00:56:53.048 END TEST nvmf_digest_clean 00:56:53.048 ************************************ 00:56:53.048 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:56:53.048 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:56:53.048 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:56:53.048 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:56:53.048 ************************************ 00:56:53.048 START TEST nvmf_digest_error 00:56:53.048 ************************************ 00:56:53.048 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:56:53.048 10:07:59 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:56:53.048 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:56:53.048 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:56:53.048 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:56:53.048 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=95240 00:56:53.048 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:56:53.048 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 95240 00:56:53.048 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 95240 ']' 00:56:53.048 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:56:53.048 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:56:53.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:56:53.048 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:56:53.048 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:56:53.048 10:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:56:53.048 [2024-12-09 10:08:00.049975] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:56:53.049 [2024-12-09 10:08:00.050051] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:56:53.308 [2024-12-09 10:08:00.197010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:56:53.308 [2024-12-09 10:08:00.246355] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:56:53.308 [2024-12-09 10:08:00.246406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:56:53.308 [2024-12-09 10:08:00.246412] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:56:53.308 [2024-12-09 10:08:00.246416] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:56:53.308 [2024-12-09 10:08:00.246420] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
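The nvmf target for the error tests is started with --wait-for-rpc, so the notices that follow (crc32c handed to the "error" module, a null0 bdev, a TCP listener on 10.0.0.3:4420) are all produced by RPC configuration. Only accel_assign_opc appears verbatim in the trace; the rest of this sketch is an assumed reconstruction from those notices, with the bdev size and block size guessed:

  rpc.py accel_assign_opc -o crc32c -m error        # verbatim in the trace below
  rpc.py framework_start_init
  rpc.py bdev_null_create null0 100 4096            # assumed size (MB) and block size
  rpc.py nvmf_create_transport -t tcp
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4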
00:56:53.308 [2024-12-09 10:08:00.246686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:56:54.247 10:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:56:54.247 10:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:56:54.247 10:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:56:54.247 10:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:56:54.247 10:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:56:54.247 10:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:56:54.247 10:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:56:54.247 10:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:54.247 10:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:56:54.247 [2024-12-09 10:08:00.997665] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:56:54.247 10:08:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:54.247 10:08:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:56:54.247 10:08:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:56:54.247 10:08:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:54.247 10:08:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:56:54.247 null0 00:56:54.247 [2024-12-09 10:08:01.096603] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:56:54.247 [2024-12-09 10:08:01.120714] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:56:54.247 10:08:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:54.247 10:08:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:56:54.247 10:08:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:56:54.247 10:08:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:56:54.247 10:08:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:56:54.247 10:08:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:56:54.247 10:08:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95284 00:56:54.247 10:08:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95284 /var/tmp/bperf.sock 00:56:54.247 10:08:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:56:54.247 10:08:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 95284 ']' 00:56:54.247 10:08:01 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:56:54.247 10:08:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:56:54.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:56:54.247 10:08:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:56:54.247 10:08:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:56:54.247 10:08:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:56:54.247 [2024-12-09 10:08:01.182620] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:56:54.247 [2024-12-09 10:08:01.182688] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95284 ] 00:56:54.506 [2024-12-09 10:08:01.328228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:56:54.506 [2024-12-09 10:08:01.372993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:56:55.075 10:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:56:55.075 10:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:56:55.075 10:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:56:55.075 10:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:56:55.335 10:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:56:55.335 10:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:55.335 10:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:56:55.335 10:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:55.335 10:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:56:55.335 10:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:56:55.594 nvme0n1 00:56:55.594 10:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:56:55.594 10:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:55.594 10:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:56:55.594 10:08:02 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:55.594 10:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:56:55.594 10:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:56:55.854 Running I/O for 2 seconds... 00:56:55.854 [2024-12-09 10:08:02.708103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:55.854 [2024-12-09 10:08:02.708152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:55.854 [2024-12-09 10:08:02.708162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:55.854 [2024-12-09 10:08:02.717612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:55.854 [2024-12-09 10:08:02.717645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:55.854 [2024-12-09 10:08:02.717669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:55.854 [2024-12-09 10:08:02.728311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:55.854 [2024-12-09 10:08:02.728347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:55.854 [2024-12-09 10:08:02.728354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:55.854 [2024-12-09 10:08:02.739281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:55.854 [2024-12-09 10:08:02.739337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:55.854 [2024-12-09 10:08:02.739346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:55.854 [2024-12-09 10:08:02.749800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:55.854 [2024-12-09 10:08:02.749832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:55.854 [2024-12-09 10:08:02.749855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:55.854 [2024-12-09 10:08:02.759737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:55.854 [2024-12-09 10:08:02.759786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:55.854 [2024-12-09 10:08:02.759794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
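Each repeating triple in this stream is one injected failure: the target's "error" accel module corrupts a crc32c result (accel_error_inject_error -o crc32c -t corrupt -i 256, traced above), the host side of the TCP connection flags a data digest error on the qpair, and the READ completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22). Because bdev_nvme_set_options was given --bdev-retry-count -1 earlier, bdevperf appears to keep retrying failed I/O for the full 2-second run instead of aborting. One way to tally the injected failures from a saved copy of this console output (console.log is a hypothetical capture):

  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR' console.log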
00:56:55.855 [2024-12-09 10:08:02.770060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:55.855 [2024-12-09 10:08:02.770093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:55.855 [2024-12-09 10:08:02.770100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:55.855 [2024-12-09 10:08:02.781174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:55.855 [2024-12-09 10:08:02.781208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:55.855 [2024-12-09 10:08:02.781216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:55.855 [2024-12-09 10:08:02.791566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:55.855 [2024-12-09 10:08:02.791599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:55.855 [2024-12-09 10:08:02.791607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:55.855 [2024-12-09 10:08:02.800410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:55.855 [2024-12-09 10:08:02.800445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:55.855 [2024-12-09 10:08:02.800453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:55.855 [2024-12-09 10:08:02.813002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:55.855 [2024-12-09 10:08:02.813037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:55.855 [2024-12-09 10:08:02.813044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:55.855 [2024-12-09 10:08:02.822322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:55.855 [2024-12-09 10:08:02.822355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:55.855 [2024-12-09 10:08:02.822363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:55.855 [2024-12-09 10:08:02.832751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:55.855 [2024-12-09 10:08:02.832787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:55.855 [2024-12-09 10:08:02.832795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:55.855 [2024-12-09 10:08:02.844122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:55.855 [2024-12-09 10:08:02.844158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:55.855 [2024-12-09 10:08:02.844166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:55.855 [2024-12-09 10:08:02.856338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:55.855 [2024-12-09 10:08:02.856380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:55.855 [2024-12-09 10:08:02.856390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:55.855 [2024-12-09 10:08:02.867884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:55.855 [2024-12-09 10:08:02.867923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:55.855 [2024-12-09 10:08:02.867932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:55.855 [2024-12-09 10:08:02.879301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:55.855 [2024-12-09 10:08:02.879338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:55.855 [2024-12-09 10:08:02.879347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:55.855 [2024-12-09 10:08:02.890802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:55.855 [2024-12-09 10:08:02.890836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:55.855 [2024-12-09 10:08:02.890845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.115 [2024-12-09 10:08:02.902419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.115 [2024-12-09 10:08:02.902453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.115 [2024-12-09 10:08:02.902477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.115 [2024-12-09 10:08:02.913896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.115 [2024-12-09 10:08:02.913933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.115 [2024-12-09 10:08:02.913942] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.115 [2024-12-09 10:08:02.925257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.115 [2024-12-09 10:08:02.925292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.115 [2024-12-09 10:08:02.925300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.115 [2024-12-09 10:08:02.935852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.115 [2024-12-09 10:08:02.935890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.115 [2024-12-09 10:08:02.935899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.115 [2024-12-09 10:08:02.945779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.115 [2024-12-09 10:08:02.945815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.116 [2024-12-09 10:08:02.945823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.116 [2024-12-09 10:08:02.956678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.116 [2024-12-09 10:08:02.956714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.116 [2024-12-09 10:08:02.956723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.116 [2024-12-09 10:08:02.969185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.116 [2024-12-09 10:08:02.969221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.116 [2024-12-09 10:08:02.969230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.116 [2024-12-09 10:08:02.980259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.116 [2024-12-09 10:08:02.980293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.116 [2024-12-09 10:08:02.980304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.116 [2024-12-09 10:08:02.994664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.116 [2024-12-09 10:08:02.994695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:56:56.116 [2024-12-09 10:08:02.994705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.116 [2024-12-09 10:08:03.006369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.116 [2024-12-09 10:08:03.006399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.116 [2024-12-09 10:08:03.006407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.116 [2024-12-09 10:08:03.017870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.116 [2024-12-09 10:08:03.017899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.116 [2024-12-09 10:08:03.017907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.116 [2024-12-09 10:08:03.025971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.116 [2024-12-09 10:08:03.026002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.116 [2024-12-09 10:08:03.026010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.116 [2024-12-09 10:08:03.038276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.116 [2024-12-09 10:08:03.038306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.116 [2024-12-09 10:08:03.038314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.116 [2024-12-09 10:08:03.049163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.116 [2024-12-09 10:08:03.049193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.116 [2024-12-09 10:08:03.049201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.116 [2024-12-09 10:08:03.059937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.116 [2024-12-09 10:08:03.059968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.116 [2024-12-09 10:08:03.059976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.116 [2024-12-09 10:08:03.070797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.116 [2024-12-09 10:08:03.070827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22141 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.116 [2024-12-09 10:08:03.070834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.116 [2024-12-09 10:08:03.081314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.116 [2024-12-09 10:08:03.081343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.116 [2024-12-09 10:08:03.081351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.116 [2024-12-09 10:08:03.092173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.116 [2024-12-09 10:08:03.092204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.116 [2024-12-09 10:08:03.092212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.116 [2024-12-09 10:08:03.103195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.116 [2024-12-09 10:08:03.103225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.116 [2024-12-09 10:08:03.103233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.116 [2024-12-09 10:08:03.114359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.116 [2024-12-09 10:08:03.114390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.116 [2024-12-09 10:08:03.114398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.116 [2024-12-09 10:08:03.122180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.116 [2024-12-09 10:08:03.122209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.116 [2024-12-09 10:08:03.122217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.116 [2024-12-09 10:08:03.133784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.116 [2024-12-09 10:08:03.133814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.116 [2024-12-09 10:08:03.133821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.116 [2024-12-09 10:08:03.144569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.116 [2024-12-09 10:08:03.144603] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.116 [2024-12-09 10:08:03.144610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.116 [2024-12-09 10:08:03.155197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.116 [2024-12-09 10:08:03.155229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.116 [2024-12-09 10:08:03.155237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.377 [2024-12-09 10:08:03.165512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.377 [2024-12-09 10:08:03.165543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.377 [2024-12-09 10:08:03.165551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.377 [2024-12-09 10:08:03.175962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.377 [2024-12-09 10:08:03.175994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.377 [2024-12-09 10:08:03.176002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.377 [2024-12-09 10:08:03.187146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.377 [2024-12-09 10:08:03.187174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.377 [2024-12-09 10:08:03.187181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.377 [2024-12-09 10:08:03.197899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.377 [2024-12-09 10:08:03.197930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.377 [2024-12-09 10:08:03.197954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.377 [2024-12-09 10:08:03.208373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.377 [2024-12-09 10:08:03.208405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.377 [2024-12-09 10:08:03.208412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.377 [2024-12-09 10:08:03.219405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 
00:56:56.377 [2024-12-09 10:08:03.219436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.377 [2024-12-09 10:08:03.219444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.377 [2024-12-09 10:08:03.229085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.377 [2024-12-09 10:08:03.229123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.377 [2024-12-09 10:08:03.229134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.377 [2024-12-09 10:08:03.241713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.377 [2024-12-09 10:08:03.241760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.377 [2024-12-09 10:08:03.241773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.377 [2024-12-09 10:08:03.252065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.377 [2024-12-09 10:08:03.252102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.377 [2024-12-09 10:08:03.252111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.377 [2024-12-09 10:08:03.262372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.377 [2024-12-09 10:08:03.262406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.377 [2024-12-09 10:08:03.262415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.377 [2024-12-09 10:08:03.273659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.377 [2024-12-09 10:08:03.273692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.377 [2024-12-09 10:08:03.273700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.377 [2024-12-09 10:08:03.284108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.377 [2024-12-09 10:08:03.284144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.377 [2024-12-09 10:08:03.284152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.377 [2024-12-09 10:08:03.294548] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.377 [2024-12-09 10:08:03.294584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.377 [2024-12-09 10:08:03.294591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.377 [2024-12-09 10:08:03.305394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.377 [2024-12-09 10:08:03.305428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.377 [2024-12-09 10:08:03.305436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.377 [2024-12-09 10:08:03.316044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.377 [2024-12-09 10:08:03.316076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.377 [2024-12-09 10:08:03.316084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.377 [2024-12-09 10:08:03.326343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.377 [2024-12-09 10:08:03.326374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.377 [2024-12-09 10:08:03.326382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.377 [2024-12-09 10:08:03.335267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.377 [2024-12-09 10:08:03.335305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.377 [2024-12-09 10:08:03.335314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.377 [2024-12-09 10:08:03.345634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.377 [2024-12-09 10:08:03.345667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.377 [2024-12-09 10:08:03.345674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.377 [2024-12-09 10:08:03.355773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.377 [2024-12-09 10:08:03.355805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.377 [2024-12-09 10:08:03.355829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:56:56.377 [2024-12-09 10:08:03.366583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.377 [2024-12-09 10:08:03.366613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.377 [2024-12-09 10:08:03.366621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.377 [2024-12-09 10:08:03.376547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.377 [2024-12-09 10:08:03.376579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.377 [2024-12-09 10:08:03.376586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.377 [2024-12-09 10:08:03.387048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.378 [2024-12-09 10:08:03.387080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.378 [2024-12-09 10:08:03.387088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.378 [2024-12-09 10:08:03.397869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.378 [2024-12-09 10:08:03.397902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.378 [2024-12-09 10:08:03.397909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.378 [2024-12-09 10:08:03.408471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.378 [2024-12-09 10:08:03.408510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.378 [2024-12-09 10:08:03.408518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.378 [2024-12-09 10:08:03.418115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.378 [2024-12-09 10:08:03.418149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.378 [2024-12-09 10:08:03.418157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.638 [2024-12-09 10:08:03.428200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.638 [2024-12-09 10:08:03.428234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.638 [2024-12-09 10:08:03.428242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.638 [2024-12-09 10:08:03.439301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.638 [2024-12-09 10:08:03.439346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.638 [2024-12-09 10:08:03.439354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.638 [2024-12-09 10:08:03.450035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.638 [2024-12-09 10:08:03.450067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.638 [2024-12-09 10:08:03.450075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.638 [2024-12-09 10:08:03.460626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.639 [2024-12-09 10:08:03.460658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.639 [2024-12-09 10:08:03.460665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.639 [2024-12-09 10:08:03.471372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.639 [2024-12-09 10:08:03.471403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.639 [2024-12-09 10:08:03.471427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.639 [2024-12-09 10:08:03.480607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.639 [2024-12-09 10:08:03.480638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.639 [2024-12-09 10:08:03.480645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.639 [2024-12-09 10:08:03.490471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.639 [2024-12-09 10:08:03.490512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.639 [2024-12-09 10:08:03.490520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.639 [2024-12-09 10:08:03.500641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.639 [2024-12-09 10:08:03.500673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.639 [2024-12-09 10:08:03.500680] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.639 [2024-12-09 10:08:03.510735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.639 [2024-12-09 10:08:03.510766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.639 [2024-12-09 10:08:03.510773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.639 [2024-12-09 10:08:03.521548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.639 [2024-12-09 10:08:03.521579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.639 [2024-12-09 10:08:03.521603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.639 [2024-12-09 10:08:03.533283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.639 [2024-12-09 10:08:03.533317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.639 [2024-12-09 10:08:03.533327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.639 [2024-12-09 10:08:03.545152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.639 [2024-12-09 10:08:03.545188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.639 [2024-12-09 10:08:03.545198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.639 [2024-12-09 10:08:03.556770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.639 [2024-12-09 10:08:03.556803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.639 [2024-12-09 10:08:03.556811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.639 [2024-12-09 10:08:03.568113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.639 [2024-12-09 10:08:03.568146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.639 [2024-12-09 10:08:03.568154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.639 [2024-12-09 10:08:03.576973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.639 [2024-12-09 10:08:03.577007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:56:56.639 [2024-12-09 10:08:03.577014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.639 [2024-12-09 10:08:03.587992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.639 [2024-12-09 10:08:03.588026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.639 [2024-12-09 10:08:03.588049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.639 [2024-12-09 10:08:03.598923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.639 [2024-12-09 10:08:03.598956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.639 [2024-12-09 10:08:03.598964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.639 [2024-12-09 10:08:03.610053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.639 [2024-12-09 10:08:03.610087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.639 [2024-12-09 10:08:03.610095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.639 [2024-12-09 10:08:03.620873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.639 [2024-12-09 10:08:03.620909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.639 [2024-12-09 10:08:03.620917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.639 [2024-12-09 10:08:03.632327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.639 [2024-12-09 10:08:03.632359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:17943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.639 [2024-12-09 10:08:03.632367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.639 [2024-12-09 10:08:03.643888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.639 [2024-12-09 10:08:03.643921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.639 [2024-12-09 10:08:03.643930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.639 [2024-12-09 10:08:03.654499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.639 [2024-12-09 10:08:03.654530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4163 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.639 [2024-12-09 10:08:03.654538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.639 [2024-12-09 10:08:03.664623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.639 [2024-12-09 10:08:03.664654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.639 [2024-12-09 10:08:03.664662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.639 [2024-12-09 10:08:03.673293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.639 [2024-12-09 10:08:03.673326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.639 [2024-12-09 10:08:03.673333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.900 [2024-12-09 10:08:03.685844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.900 [2024-12-09 10:08:03.685878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.900 [2024-12-09 10:08:03.685887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.900 23517.00 IOPS, 91.86 MiB/s [2024-12-09T10:08:03.944Z] [2024-12-09 10:08:03.697828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.900 [2024-12-09 10:08:03.697861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.900 [2024-12-09 10:08:03.697885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.900 [2024-12-09 10:08:03.708607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.900 [2024-12-09 10:08:03.708641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.900 [2024-12-09 10:08:03.708649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.900 [2024-12-09 10:08:03.718148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.900 [2024-12-09 10:08:03.718181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.900 [2024-12-09 10:08:03.718188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.900 [2024-12-09 10:08:03.728157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.900 [2024-12-09 
10:08:03.728190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.900 [2024-12-09 10:08:03.728214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.900 [2024-12-09 10:08:03.738307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.900 [2024-12-09 10:08:03.738340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.900 [2024-12-09 10:08:03.738348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.900 [2024-12-09 10:08:03.749075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.900 [2024-12-09 10:08:03.749108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.900 [2024-12-09 10:08:03.749116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.900 [2024-12-09 10:08:03.759791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.900 [2024-12-09 10:08:03.759823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.900 [2024-12-09 10:08:03.759831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.900 [2024-12-09 10:08:03.770510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.900 [2024-12-09 10:08:03.770540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.900 [2024-12-09 10:08:03.770547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.900 [2024-12-09 10:08:03.780640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.900 [2024-12-09 10:08:03.780671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.900 [2024-12-09 10:08:03.780679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.900 [2024-12-09 10:08:03.789306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.900 [2024-12-09 10:08:03.789338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.900 [2024-12-09 10:08:03.789346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.900 [2024-12-09 10:08:03.800952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x25012d0) 00:56:56.900 [2024-12-09 10:08:03.800987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.900 [2024-12-09 10:08:03.800996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.900 [2024-12-09 10:08:03.812692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.900 [2024-12-09 10:08:03.812727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.900 [2024-12-09 10:08:03.812751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.900 [2024-12-09 10:08:03.825735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.900 [2024-12-09 10:08:03.825772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.900 [2024-12-09 10:08:03.825780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.900 [2024-12-09 10:08:03.835797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.900 [2024-12-09 10:08:03.835833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.900 [2024-12-09 10:08:03.835841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.900 [2024-12-09 10:08:03.847714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.900 [2024-12-09 10:08:03.847748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.900 [2024-12-09 10:08:03.847757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.900 [2024-12-09 10:08:03.858557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.900 [2024-12-09 10:08:03.858588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.900 [2024-12-09 10:08:03.858612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.900 [2024-12-09 10:08:03.869069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.900 [2024-12-09 10:08:03.869102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.900 [2024-12-09 10:08:03.869110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.900 [2024-12-09 10:08:03.879345] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.900 [2024-12-09 10:08:03.879378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.900 [2024-12-09 10:08:03.879386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.900 [2024-12-09 10:08:03.889662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.900 [2024-12-09 10:08:03.889694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.900 [2024-12-09 10:08:03.889702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.900 [2024-12-09 10:08:03.900766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.900 [2024-12-09 10:08:03.900800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.900 [2024-12-09 10:08:03.900824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.900 [2024-12-09 10:08:03.911687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.900 [2024-12-09 10:08:03.911722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.900 [2024-12-09 10:08:03.911730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.901 [2024-12-09 10:08:03.921515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.901 [2024-12-09 10:08:03.921550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.901 [2024-12-09 10:08:03.921558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:56.901 [2024-12-09 10:08:03.933191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:56.901 [2024-12-09 10:08:03.933225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:56.901 [2024-12-09 10:08:03.933233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:57.161 [2024-12-09 10:08:03.944538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:57.161 [2024-12-09 10:08:03.944571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:57.161 [2024-12-09 10:08:03.944580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:56:57.161 [2024-12-09 10:08:03.956278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:57.161 [2024-12-09 10:08:03.956335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:57.161 [2024-12-09 10:08:03.956350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:57.161 [2024-12-09 10:08:03.968124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:57.162 [2024-12-09 10:08:03.968161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:57.162 [2024-12-09 10:08:03.968169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:57.162 [2024-12-09 10:08:03.979791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:57.162 [2024-12-09 10:08:03.979826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:57.162 [2024-12-09 10:08:03.979834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:57.162 [2024-12-09 10:08:03.989531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:57.162 [2024-12-09 10:08:03.989563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:57.162 [2024-12-09 10:08:03.989570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:57.162 [2024-12-09 10:08:04.001352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:57.162 [2024-12-09 10:08:04.001387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:57.162 [2024-12-09 10:08:04.001397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:57.162 [2024-12-09 10:08:04.013528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:57.162 [2024-12-09 10:08:04.013556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:57.162 [2024-12-09 10:08:04.013565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:57.162 [2024-12-09 10:08:04.026485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:57.162 [2024-12-09 10:08:04.026527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:57.162 [2024-12-09 10:08:04.026538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:57.162 [2024-12-09 10:08:04.039958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:57.162 [2024-12-09 10:08:04.040003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:57.162 [2024-12-09 10:08:04.040013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:57.162 [2024-12-09 10:08:04.052608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:57.162 [2024-12-09 10:08:04.052654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:57.162 [2024-12-09 10:08:04.052664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:57.162 [2024-12-09 10:08:04.062914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:57.162 [2024-12-09 10:08:04.062962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:57.162 [2024-12-09 10:08:04.062976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:57.162 [2024-12-09 10:08:04.073187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:57.162 [2024-12-09 10:08:04.073225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:57.162 [2024-12-09 10:08:04.073234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:57.162 [2024-12-09 10:08:04.084495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:57.162 [2024-12-09 10:08:04.084532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:57.162 [2024-12-09 10:08:04.084541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:57.162 [2024-12-09 10:08:04.095277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:57.162 [2024-12-09 10:08:04.095318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:57.162 [2024-12-09 10:08:04.095327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:57.162 [2024-12-09 10:08:04.105881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:57.162 [2024-12-09 10:08:04.105917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:57.162 [2024-12-09 10:08:04.105925] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:57.162 [2024-12-09 10:08:04.117244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:57.162 [2024-12-09 10:08:04.117280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:57.162 [2024-12-09 10:08:04.117304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:57.162 [2024-12-09 10:08:04.127998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:57.162 [2024-12-09 10:08:04.128035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:25067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:57.162 [2024-12-09 10:08:04.128043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:57.162 [2024-12-09 10:08:04.139114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:57.162 [2024-12-09 10:08:04.139149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:57.162 [2024-12-09 10:08:04.139173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:57.162 [2024-12-09 10:08:04.148284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:57.162 [2024-12-09 10:08:04.148319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:57.162 [2024-12-09 10:08:04.148328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:57.162 [2024-12-09 10:08:04.159976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:57.162 [2024-12-09 10:08:04.160014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:57.162 [2024-12-09 10:08:04.160022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:57.162 [2024-12-09 10:08:04.171470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:57.162 [2024-12-09 10:08:04.171509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:17639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:57.162 [2024-12-09 10:08:04.171517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:57.162 [2024-12-09 10:08:04.181040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:57.162 [2024-12-09 10:08:04.181075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:56:57.162 [2024-12-09 10:08:04.181085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:57.162 [2024-12-09 10:08:04.192523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:57.162 [2024-12-09 10:08:04.192556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:57.162 [2024-12-09 10:08:04.192564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:57.162 [2024-12-09 10:08:04.203080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:57.162 [2024-12-09 10:08:04.203112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:57.162 [2024-12-09 10:08:04.203119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:57.423 [2024-12-09 10:08:04.214039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:57.423 [2024-12-09 10:08:04.214071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:57.423 [2024-12-09 10:08:04.214095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:57.423 [2024-12-09 10:08:04.224544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:57.423 [2024-12-09 10:08:04.224574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:57.423 [2024-12-09 10:08:04.224583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:57.423 [2024-12-09 10:08:04.236230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:57.423 [2024-12-09 10:08:04.236262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:57.423 [2024-12-09 10:08:04.236270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:57.423 [2024-12-09 10:08:04.246957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:57.423 [2024-12-09 10:08:04.246990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:57.423 [2024-12-09 10:08:04.247014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:57.423 [2024-12-09 10:08:04.257164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0) 00:56:57.423 [2024-12-09 10:08:04.257198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 
lba:4987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:56:57.423 [2024-12-09 10:08:04.257207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:56:57.423 [2024-12-09 10:08:04.269452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25012d0)
00:56:57.423 [2024-12-09 10:08:04.269514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:56:57.423 [2024-12-09 10:08:04.269525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... ~35 further near-identical "data digest error on tqpair=(0x25012d0)" / READ / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplets (qid:1, len:1, varying cid/lba, 10:08:04.280 through 10:08:04.685) omitted ...]
00:56:57.684 23216.50 IOPS, 90.69 MiB/s
00:56:57.684 Latency(us)
00:56:57.684 [2024-12-09T10:08:04.728Z] Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:56:57.684 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:56:57.684 nvme0n1 : 2.00                      23236.56      90.77      0.00     0.00    5503.75    2847.52   14767.06
00:56:57.684 [2024-12-09T10:08:04.728Z] ===================================================================================================================
00:56:57.684 [2024-12-09T10:08:04.728Z] Total :             23236.56      90.77      0.00     0.00    5503.75    2847.52   14767.06
00:56:57.684 {
00:56:57.684   "results": [
00:56:57.684     {
00:56:57.684       "job": "nvme0n1",
00:56:57.684       "core_mask": "0x2",
00:56:57.684       "workload": "randread",
00:56:57.684       "status": "finished",
00:56:57.684       "queue_depth": 128,
00:56:57.684       "io_size": 4096,
00:56:57.684       "runtime": 2.003782,
00:56:57.684       "iops": 23236.559665672215,
00:56:57.684       "mibps": 90.76781119403209,
00:56:57.684       "io_failed": 0,
00:56:57.684       "io_timeout": 0,
00:56:57.684       "avg_latency_us": 5503.750947590094,
00:56:57.684       "min_latency_us": 2847.5248908296944,
00:56:57.684       "max_latency_us": 14767.063755458516
00:56:57.684     }
00:56:57.684   ],
00:56:57.684   "core_count": 1
00:56:57.684 }
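The block above is the raw perform_tests result that bdevperf prints as JSON. If you need the headline numbers without reading the whole blob, a jq filter over a saved copy does it (the results.json filename is an assumption for illustration, not something this test writes):

    # Pull job name, IOPS, and average latency out of a saved perform_tests result.
    jq -r '.results[] | "\(.job): \(.iops | floor) IOPS, avg latency \(.avg_latency_us | floor) us"' results.json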
00:56:57.684 10:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:56:57.684 10:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:56:57.684 10:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:56:57.684 | .driver_specific
00:56:57.684 | .nvme_error
00:56:57.684 | .status_code
00:56:57.684 | .command_transient_transport_error'
00:56:57.684 10:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:56:57.943 10:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 182 > 0 ))
00:56:57.943 10:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95284
00:56:57.943 10:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 95284 ']'
00:56:57.943 10:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 95284
00:56:57.943 10:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:56:57.943 10:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:56:57.943 10:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95284
00:56:57.943 10:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:56:57.943 10:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
killing process with pid 95284
00:56:57.943 10:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95284'
00:56:57.943 10:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 95284
Received shutdown signal, test time was about 2.000000 seconds
00:56:57.943
00:56:57.943 Latency(us)
00:56:57.943 [2024-12-09T10:08:04.987Z] Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:56:57.943 [2024-12-09T10:08:04.987Z] ===================================================================================================================
00:56:57.943 [2024-12-09T10:08:04.987Z] Total :                  0.00       0.00      0.00     0.00       0.00       0.00       0.00
00:56:57.943 10:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 95284
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
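The get_transient_errcount exchange above is the actual pass/fail check for this case: it reads the per-bdev NVMe error counters that bdev_nvme_set_options --nvme-error-stat enables and asserts that the transient-transport-error count is non-zero (182 in this run). A minimal stand-alone sketch of the same query, with the socket path and jq filter copied verbatim from the trace:

    #!/usr/bin/env bash
    # Ask a running bdevperf instance how many commands completed with
    # COMMAND TRANSIENT TRANSPORT ERROR. Assumes bdevperf listens on
    # /var/tmp/bperf.sock and --nvme-error-stat was set, as in this log.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # The test only requires a non-zero count.
    (( errcount > 0 )) || echo "expected digest errors, got none" >&2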
00:56:58.210 10:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:56:58.210 10:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:56:58.210 10:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:56:58.210 10:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:56:58.210 10:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:56:58.210 10:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:56:58.210 10:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95369
00:56:58.210 10:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95369 /var/tmp/bperf.sock
00:56:58.210 10:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 95369 ']'
00:56:58.210 10:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:56:58.210 10:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:56:58.210 10:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:56:58.210 10:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:56:58.210 10:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
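The @57 through @60 lines above launch bdevperf on a private RPC socket and then block in waitforlisten until the application answers RPCs. A stripped-down sketch of that launch-and-poll pattern; the rpc_get_methods probe and the retry bound mirror the helper's approach, but this is an illustration, not the autotest_common.sh implementation:

    #!/usr/bin/env bash
    # Start bdevperf with its own RPC socket, then poll until it is ready.
    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    for ((i = 0; i < 100; i++)); do
        # rpc_get_methods is a cheap probe; it succeeds once the app listens.
        "$spdk"/scripts/rpc.py -t 1 -s /var/tmp/bperf.sock rpc_get_methods &>/dev/null && break
        sleep 0.5
    done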
00:56:58.210 I/O size of 131072 is greater than zero copy threshold (65536).
00:56:58.210 Zero copy mechanism will not be used.
00:56:58.210 [2024-12-09 10:08:05.185978] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization...
00:56:58.210 [2024-12-09 10:08:05.186048] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95369 ]
00:56:58.469 [2024-12-09 10:08:05.333717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:56:58.469 [2024-12-09 10:08:05.385519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:56:59.407 10:08:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:56:59.407 10:08:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:56:59.407 10:08:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:56:59.407 10:08:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:56:59.407 10:08:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:56:59.407 10:08:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:56:59.407 10:08:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:56:59.407 10:08:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:56:59.407 10:08:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:56:59.407 10:08:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:56:59.667 nvme0n1
00:56:59.667 10:08:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:56:59.667 10:08:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:56:59.667 10:08:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:56:59.667 10:08:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:56:59.667 10:08:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:56:59.667 10:08:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:56:59.928 I/O size of 131072 is greater than zero copy threshold (65536).
00:56:59.928 Zero copy mechanism will not be used.
00:56:59.928 Running I/O for 2 seconds...
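The trace above is the whole digest-error mechanism for this case in one place: NVMe error counters on, any previous crc32c injection cleared, the controller attached with data digest enabled, crc32c corruption armed, then the timed workload kicked off. Condensed into plain rpc.py calls, with every flag copied from the trace (only the rpc array shorthand is added):

    #!/usr/bin/env bash
    rpc=(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock)
    # Track per-command NVMe error codes; retry failed I/O indefinitely.
    "${rpc[@]}" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Start from a clean state, then attach with data digest (--ddgst) enabled.
    "${rpc[@]}" accel_error_inject_error -o crc32c -t disable
    "${rpc[@]}" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Arm crc32c corruption so computed data digests mismatch on the wire
    # (flags copied verbatim from the trace above).
    "${rpc[@]}" accel_error_inject_error -o crc32c -t corrupt -i 32
    # Run the preconfigured 2-second randread job in the waiting bdevperf.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests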
00:56:59.928 [2024-12-09 10:08:06.734106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0)
00:56:59.928 [2024-12-09 10:08:06.734162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:56:59.928 [2024-12-09 10:08:06.734172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... ~70 further near-identical "data digest error on tqpair=(0x1dbbdd0)" / READ / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplets (qid:1, len:32, varying cid/lba, 10:08:06.737 through 10:08:06.987, continuing past the end of this excerpt) omitted ...]
dnr:0 00:57:00.193 [2024-12-09 10:08:06.990729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.193 [2024-12-09 10:08:06.990763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.193 [2024-12-09 10:08:06.990771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.193 [2024-12-09 10:08:06.994409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.193 [2024-12-09 10:08:06.994443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.193 [2024-12-09 10:08:06.994451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.193 [2024-12-09 10:08:06.997804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.193 [2024-12-09 10:08:06.997837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.193 [2024-12-09 10:08:06.997846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.193 [2024-12-09 10:08:07.000006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.193 [2024-12-09 10:08:07.000041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.193 [2024-12-09 10:08:07.000049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.193 [2024-12-09 10:08:07.003672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.193 [2024-12-09 10:08:07.003706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.193 [2024-12-09 10:08:07.003714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.193 [2024-12-09 10:08:07.007145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.193 [2024-12-09 10:08:07.007179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.193 [2024-12-09 10:08:07.007187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.194 [2024-12-09 10:08:07.009436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.194 [2024-12-09 10:08:07.009558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.194 [2024-12-09 10:08:07.009568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.194 [2024-12-09 10:08:07.012257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.194 [2024-12-09 10:08:07.012288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.194 [2024-12-09 10:08:07.012296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.194 [2024-12-09 10:08:07.015127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.194 [2024-12-09 10:08:07.015158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.194 [2024-12-09 10:08:07.015166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.194 [2024-12-09 10:08:07.017692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.194 [2024-12-09 10:08:07.017724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.194 [2024-12-09 10:08:07.017732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.194 [2024-12-09 10:08:07.020707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.194 [2024-12-09 10:08:07.020741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.194 [2024-12-09 10:08:07.020749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.194 [2024-12-09 10:08:07.024044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.194 [2024-12-09 10:08:07.024078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.194 [2024-12-09 10:08:07.024085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.194 [2024-12-09 10:08:07.026361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.194 [2024-12-09 10:08:07.026432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.194 [2024-12-09 10:08:07.026441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.194 [2024-12-09 10:08:07.029847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.194 [2024-12-09 10:08:07.029880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.194 [2024-12-09 10:08:07.029889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.194 [2024-12-09 10:08:07.033466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.194 [2024-12-09 10:08:07.033509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.194 [2024-12-09 10:08:07.033518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.194 [2024-12-09 10:08:07.037037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.194 [2024-12-09 10:08:07.037071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.194 [2024-12-09 10:08:07.037080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.194 [2024-12-09 10:08:07.039791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.194 [2024-12-09 10:08:07.039822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.194 [2024-12-09 10:08:07.039830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.194 [2024-12-09 10:08:07.043016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.194 [2024-12-09 10:08:07.043051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.194 [2024-12-09 10:08:07.043060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.194 [2024-12-09 10:08:07.046709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.194 [2024-12-09 10:08:07.046744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.194 [2024-12-09 10:08:07.046752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.194 [2024-12-09 10:08:07.049417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.194 [2024-12-09 10:08:07.049450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.194 [2024-12-09 10:08:07.049458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.194 [2024-12-09 10:08:07.052349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.194 [2024-12-09 10:08:07.052383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:57:00.194 [2024-12-09 10:08:07.052391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.194 [2024-12-09 10:08:07.054899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.194 [2024-12-09 10:08:07.054930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.194 [2024-12-09 10:08:07.054937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.194 [2024-12-09 10:08:07.057652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.194 [2024-12-09 10:08:07.057683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.194 [2024-12-09 10:08:07.057691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.194 [2024-12-09 10:08:07.060469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.194 [2024-12-09 10:08:07.060517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.194 [2024-12-09 10:08:07.060526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.194 [2024-12-09 10:08:07.063005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.194 [2024-12-09 10:08:07.063078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.194 [2024-12-09 10:08:07.063087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.194 [2024-12-09 10:08:07.066154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.194 [2024-12-09 10:08:07.066187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.194 [2024-12-09 10:08:07.066195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.194 [2024-12-09 10:08:07.068984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.194 [2024-12-09 10:08:07.069016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.194 [2024-12-09 10:08:07.069025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.194 [2024-12-09 10:08:07.071696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.194 [2024-12-09 10:08:07.071795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.194 [2024-12-09 10:08:07.071805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.194 [2024-12-09 10:08:07.074660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.194 [2024-12-09 10:08:07.074692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.194 [2024-12-09 10:08:07.074700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.194 [2024-12-09 10:08:07.077325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.194 [2024-12-09 10:08:07.077359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.194 [2024-12-09 10:08:07.077367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.194 [2024-12-09 10:08:07.080139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.194 [2024-12-09 10:08:07.080172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.194 [2024-12-09 10:08:07.080180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.194 [2024-12-09 10:08:07.082417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.194 [2024-12-09 10:08:07.082451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.194 [2024-12-09 10:08:07.082459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.194 [2024-12-09 10:08:07.085396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.194 [2024-12-09 10:08:07.085498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.194 [2024-12-09 10:08:07.085523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.194 [2024-12-09 10:08:07.088642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.195 [2024-12-09 10:08:07.088677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.195 [2024-12-09 10:08:07.088685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.195 [2024-12-09 10:08:07.091263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.195 [2024-12-09 10:08:07.091304] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.195 [2024-12-09 10:08:07.091313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.195 [2024-12-09 10:08:07.094154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.195 [2024-12-09 10:08:07.094188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.195 [2024-12-09 10:08:07.094196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.195 [2024-12-09 10:08:07.097458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.195 [2024-12-09 10:08:07.097553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.195 [2024-12-09 10:08:07.097564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.195 [2024-12-09 10:08:07.101298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.195 [2024-12-09 10:08:07.101336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.195 [2024-12-09 10:08:07.101347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.195 [2024-12-09 10:08:07.104048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.195 [2024-12-09 10:08:07.104136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.195 [2024-12-09 10:08:07.104211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.195 [2024-12-09 10:08:07.107517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.195 [2024-12-09 10:08:07.107550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.195 [2024-12-09 10:08:07.107558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.195 [2024-12-09 10:08:07.111116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.195 [2024-12-09 10:08:07.111148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.195 [2024-12-09 10:08:07.111156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.195 [2024-12-09 10:08:07.114767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 
00:57:00.195 [2024-12-09 10:08:07.114800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.195 [2024-12-09 10:08:07.114808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.195 [2024-12-09 10:08:07.118331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.195 [2024-12-09 10:08:07.118411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.195 [2024-12-09 10:08:07.118420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.195 [2024-12-09 10:08:07.121997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.195 [2024-12-09 10:08:07.122030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.195 [2024-12-09 10:08:07.122038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.195 [2024-12-09 10:08:07.125499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.195 [2024-12-09 10:08:07.125613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.195 [2024-12-09 10:08:07.125648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.195 [2024-12-09 10:08:07.129096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.195 [2024-12-09 10:08:07.129176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.195 [2024-12-09 10:08:07.129209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.195 [2024-12-09 10:08:07.132739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.195 [2024-12-09 10:08:07.132819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.195 [2024-12-09 10:08:07.132852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.195 [2024-12-09 10:08:07.136445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.195 [2024-12-09 10:08:07.136533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.195 [2024-12-09 10:08:07.136571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.195 [2024-12-09 10:08:07.139871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.195 [2024-12-09 10:08:07.139950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.195 [2024-12-09 10:08:07.139998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.195 [2024-12-09 10:08:07.143551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.195 [2024-12-09 10:08:07.143636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.195 [2024-12-09 10:08:07.143678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.195 [2024-12-09 10:08:07.147294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.195 [2024-12-09 10:08:07.147389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.195 [2024-12-09 10:08:07.147430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.195 [2024-12-09 10:08:07.150601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.195 [2024-12-09 10:08:07.150674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.195 [2024-12-09 10:08:07.150712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.195 [2024-12-09 10:08:07.154154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.195 [2024-12-09 10:08:07.154232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.195 [2024-12-09 10:08:07.154270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.195 [2024-12-09 10:08:07.157636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.195 [2024-12-09 10:08:07.157731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.195 [2024-12-09 10:08:07.157766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.195 [2024-12-09 10:08:07.161200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.195 [2024-12-09 10:08:07.161279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.195 [2024-12-09 10:08:07.161311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.195 [2024-12-09 10:08:07.164778] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.195 [2024-12-09 10:08:07.164854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.195 [2024-12-09 10:08:07.164886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.195 [2024-12-09 10:08:07.168269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.195 [2024-12-09 10:08:07.168349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.195 [2024-12-09 10:08:07.168384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.195 [2024-12-09 10:08:07.171754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.195 [2024-12-09 10:08:07.171832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.195 [2024-12-09 10:08:07.171865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.195 [2024-12-09 10:08:07.175264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.195 [2024-12-09 10:08:07.175365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.195 [2024-12-09 10:08:07.175402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.195 [2024-12-09 10:08:07.178815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.195 [2024-12-09 10:08:07.178892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.195 [2024-12-09 10:08:07.178930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.195 [2024-12-09 10:08:07.182275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.195 [2024-12-09 10:08:07.182354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.195 [2024-12-09 10:08:07.182392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.196 [2024-12-09 10:08:07.185818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.196 [2024-12-09 10:08:07.185896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.196 [2024-12-09 10:08:07.185929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:57:00.196 [2024-12-09 10:08:07.189338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.196 [2024-12-09 10:08:07.189419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.196 [2024-12-09 10:08:07.189451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.196 [2024-12-09 10:08:07.192891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.196 [2024-12-09 10:08:07.192971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.196 [2024-12-09 10:08:07.193003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.196 [2024-12-09 10:08:07.195461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.196 [2024-12-09 10:08:07.195546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.196 [2024-12-09 10:08:07.195584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.196 [2024-12-09 10:08:07.198731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.196 [2024-12-09 10:08:07.198811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.196 [2024-12-09 10:08:07.198851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.196 [2024-12-09 10:08:07.202436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.196 [2024-12-09 10:08:07.202526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.196 [2024-12-09 10:08:07.202583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.196 [2024-12-09 10:08:07.206194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.196 [2024-12-09 10:08:07.206275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.196 [2024-12-09 10:08:07.206313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.196 [2024-12-09 10:08:07.210418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.196 [2024-12-09 10:08:07.210519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.196 [2024-12-09 10:08:07.210558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.196 [2024-12-09 10:08:07.214638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.196 [2024-12-09 10:08:07.214728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.196 [2024-12-09 10:08:07.214772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.196 [2024-12-09 10:08:07.219018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.196 [2024-12-09 10:08:07.219100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.196 [2024-12-09 10:08:07.219111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.196 [2024-12-09 10:08:07.223230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.196 [2024-12-09 10:08:07.223269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.196 [2024-12-09 10:08:07.223279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.196 [2024-12-09 10:08:07.227338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.196 [2024-12-09 10:08:07.227374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.196 [2024-12-09 10:08:07.227383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.196 [2024-12-09 10:08:07.231261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.196 [2024-12-09 10:08:07.231306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.196 [2024-12-09 10:08:07.231316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.458 [2024-12-09 10:08:07.235503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.458 [2024-12-09 10:08:07.235537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.458 [2024-12-09 10:08:07.235547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.458 [2024-12-09 10:08:07.239651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.458 [2024-12-09 10:08:07.239690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.458 [2024-12-09 10:08:07.239700] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.458 [2024-12-09 10:08:07.243742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.458 [2024-12-09 10:08:07.243780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.458 [2024-12-09 10:08:07.243791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.458 [2024-12-09 10:08:07.247599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.458 [2024-12-09 10:08:07.247635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.458 [2024-12-09 10:08:07.247644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.458 [2024-12-09 10:08:07.251616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.458 [2024-12-09 10:08:07.251651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.458 [2024-12-09 10:08:07.251661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.458 [2024-12-09 10:08:07.254905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.458 [2024-12-09 10:08:07.254937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.458 [2024-12-09 10:08:07.254946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.458 [2024-12-09 10:08:07.258312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.458 [2024-12-09 10:08:07.258345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.458 [2024-12-09 10:08:07.258353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.458 [2024-12-09 10:08:07.260812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.458 [2024-12-09 10:08:07.260863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.458 [2024-12-09 10:08:07.260873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.458 [2024-12-09 10:08:07.264430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.458 [2024-12-09 10:08:07.264469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:57:00.458 [2024-12-09 10:08:07.264500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.458 [2024-12-09 10:08:07.268343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.458 [2024-12-09 10:08:07.268380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.458 [2024-12-09 10:08:07.268390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.458 [2024-12-09 10:08:07.270725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.458 [2024-12-09 10:08:07.270757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.458 [2024-12-09 10:08:07.270766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.458 [2024-12-09 10:08:07.274281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.458 [2024-12-09 10:08:07.274317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.458 [2024-12-09 10:08:07.274326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.458 [2024-12-09 10:08:07.277247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.458 [2024-12-09 10:08:07.277280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.458 [2024-12-09 10:08:07.277288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.458 [2024-12-09 10:08:07.280806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.458 [2024-12-09 10:08:07.280933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.458 [2024-12-09 10:08:07.280943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.458 [2024-12-09 10:08:07.284863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.458 [2024-12-09 10:08:07.284950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.458 [2024-12-09 10:08:07.284959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.458 [2024-12-09 10:08:07.288530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.458 [2024-12-09 10:08:07.288564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22976 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.458 [2024-12-09 10:08:07.288572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.458 [2024-12-09 10:08:07.291083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.458 [2024-12-09 10:08:07.291115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.458 [2024-12-09 10:08:07.291124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.458 [2024-12-09 10:08:07.294861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.458 [2024-12-09 10:08:07.294899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.458 [2024-12-09 10:08:07.294908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.458 [2024-12-09 10:08:07.298564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.458 [2024-12-09 10:08:07.298594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.458 [2024-12-09 10:08:07.298603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.458 [2024-12-09 10:08:07.302188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.458 [2024-12-09 10:08:07.302220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.458 [2024-12-09 10:08:07.302228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.458 [2024-12-09 10:08:07.304814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.458 [2024-12-09 10:08:07.304843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.459 [2024-12-09 10:08:07.304851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.459 [2024-12-09 10:08:07.308026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.459 [2024-12-09 10:08:07.308061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.459 [2024-12-09 10:08:07.308069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.459 [2024-12-09 10:08:07.311843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.459 [2024-12-09 10:08:07.311878] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.459 [2024-12-09 10:08:07.311887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.459 [2024-12-09 10:08:07.315461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.459 [2024-12-09 10:08:07.315506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.459 [2024-12-09 10:08:07.315515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.459 [2024-12-09 10:08:07.318044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.459 [2024-12-09 10:08:07.318075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.459 [2024-12-09 10:08:07.318083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.459 [2024-12-09 10:08:07.321485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.459 [2024-12-09 10:08:07.321574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.459 [2024-12-09 10:08:07.321583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.459 [2024-12-09 10:08:07.325308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.459 [2024-12-09 10:08:07.325394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.459 [2024-12-09 10:08:07.325404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.459 [2024-12-09 10:08:07.329317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.459 [2024-12-09 10:08:07.329401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.459 [2024-12-09 10:08:07.329411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.459 [2024-12-09 10:08:07.333128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.459 [2024-12-09 10:08:07.333212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.459 [2024-12-09 10:08:07.333222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.459 [2024-12-09 10:08:07.337556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.459 [2024-12-09 10:08:07.337596] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.459 [2024-12-09 10:08:07.337606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.459 [2024-12-09 10:08:07.341654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.459 [2024-12-09 10:08:07.341690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.459 [2024-12-09 10:08:07.341698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.459 [2024-12-09 10:08:07.345684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.459 [2024-12-09 10:08:07.345725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.459 [2024-12-09 10:08:07.345735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.459 [2024-12-09 10:08:07.349803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.459 [2024-12-09 10:08:07.349840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.459 [2024-12-09 10:08:07.349850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.459 [2024-12-09 10:08:07.353595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.459 [2024-12-09 10:08:07.353632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.459 [2024-12-09 10:08:07.353641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.459 [2024-12-09 10:08:07.356390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.459 [2024-12-09 10:08:07.356430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.459 [2024-12-09 10:08:07.356451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.459 [2024-12-09 10:08:07.360048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.459 [2024-12-09 10:08:07.360088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.459 [2024-12-09 10:08:07.360098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.459 [2024-12-09 10:08:07.364164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1dbbdd0) 00:57:00.459 [2024-12-09 10:08:07.364203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.459 [2024-12-09 10:08:07.364213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.459 [2024-12-09 10:08:07.368329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.459 [2024-12-09 10:08:07.368368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.459 [2024-12-09 10:08:07.368378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.459 [2024-12-09 10:08:07.370906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.459 [2024-12-09 10:08:07.370943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.459 [2024-12-09 10:08:07.370953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.459 [2024-12-09 10:08:07.375079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.459 [2024-12-09 10:08:07.375118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.459 [2024-12-09 10:08:07.375127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.459 [2024-12-09 10:08:07.379115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.459 [2024-12-09 10:08:07.379153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.459 [2024-12-09 10:08:07.379163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.459 [2024-12-09 10:08:07.383213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.459 [2024-12-09 10:08:07.383248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.459 [2024-12-09 10:08:07.383257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.459 [2024-12-09 10:08:07.387113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.459 [2024-12-09 10:08:07.387149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.459 [2024-12-09 10:08:07.387157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.459 [2024-12-09 10:08:07.390743] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.459 [2024-12-09 10:08:07.390777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.459 [2024-12-09 10:08:07.390785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.459 [2024-12-09 10:08:07.394295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.459 [2024-12-09 10:08:07.394329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.459 [2024-12-09 10:08:07.394337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.459 [2024-12-09 10:08:07.398042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.459 [2024-12-09 10:08:07.398074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.459 [2024-12-09 10:08:07.398082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.459 [2024-12-09 10:08:07.401807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.459 [2024-12-09 10:08:07.401839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.459 [2024-12-09 10:08:07.401848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.459 [2024-12-09 10:08:07.405473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.459 [2024-12-09 10:08:07.405565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.460 [2024-12-09 10:08:07.405575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.460 [2024-12-09 10:08:07.409130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.460 [2024-12-09 10:08:07.409165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.460 [2024-12-09 10:08:07.409173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.460 [2024-12-09 10:08:07.412762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.460 [2024-12-09 10:08:07.412861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.460 [2024-12-09 10:08:07.412896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:57:00.460 [2024-12-09 10:08:07.416433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.460 [2024-12-09 10:08:07.416539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.460 [2024-12-09 10:08:07.416573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.460 [2024-12-09 10:08:07.420050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.460 [2024-12-09 10:08:07.420134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.460 [2024-12-09 10:08:07.420172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.460 [2024-12-09 10:08:07.423433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.460 [2024-12-09 10:08:07.423537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.460 [2024-12-09 10:08:07.423574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.460 [2024-12-09 10:08:07.426959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.460 [2024-12-09 10:08:07.427035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.460 [2024-12-09 10:08:07.427067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.460 [2024-12-09 10:08:07.430520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.460 [2024-12-09 10:08:07.430595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.460 [2024-12-09 10:08:07.430627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.460 [2024-12-09 10:08:07.434158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.460 [2024-12-09 10:08:07.434239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.460 [2024-12-09 10:08:07.434272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.460 [2024-12-09 10:08:07.437908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.460 [2024-12-09 10:08:07.437988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.460 [2024-12-09 10:08:07.438021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.460 [2024-12-09 10:08:07.441561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.460 [2024-12-09 10:08:07.441641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.460 [2024-12-09 10:08:07.441673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.460 [2024-12-09 10:08:07.445148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.460 [2024-12-09 10:08:07.445229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.460 [2024-12-09 10:08:07.445262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.460 [2024-12-09 10:08:07.448853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.460 [2024-12-09 10:08:07.448946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.460 [2024-12-09 10:08:07.448981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.460 [2024-12-09 10:08:07.452576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.460 [2024-12-09 10:08:07.452657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.460 [2024-12-09 10:08:07.452691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.460 [2024-12-09 10:08:07.456331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.460 [2024-12-09 10:08:07.456414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.460 [2024-12-09 10:08:07.456450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.460 [2024-12-09 10:08:07.460307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.460 [2024-12-09 10:08:07.460391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.460 [2024-12-09 10:08:07.460448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.460 [2024-12-09 10:08:07.464346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.460 [2024-12-09 10:08:07.464430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.460 [2024-12-09 10:08:07.464482] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.460 [2024-12-09 10:08:07.467971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.460 [2024-12-09 10:08:07.468045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.460 [2024-12-09 10:08:07.468054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.460 [2024-12-09 10:08:07.471590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.460 [2024-12-09 10:08:07.471625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.460 [2024-12-09 10:08:07.471635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.460 [2024-12-09 10:08:07.475215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.460 [2024-12-09 10:08:07.475251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.460 [2024-12-09 10:08:07.475259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.460 [2024-12-09 10:08:07.479048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.460 [2024-12-09 10:08:07.479084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.460 [2024-12-09 10:08:07.479092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.460 [2024-12-09 10:08:07.483223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.460 [2024-12-09 10:08:07.483261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.460 [2024-12-09 10:08:07.483270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.460 [2024-12-09 10:08:07.487225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.460 [2024-12-09 10:08:07.487262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.460 [2024-12-09 10:08:07.487271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.460 [2024-12-09 10:08:07.490911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.460 [2024-12-09 10:08:07.490944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.461 [2024-12-09 10:08:07.490952] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.461 [2024-12-09 10:08:07.494506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.461 [2024-12-09 10:08:07.494539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.461 [2024-12-09 10:08:07.494548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.461 [2024-12-09 10:08:07.498114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.461 [2024-12-09 10:08:07.498149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.461 [2024-12-09 10:08:07.498157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.723 [2024-12-09 10:08:07.501804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.723 [2024-12-09 10:08:07.501837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.723 [2024-12-09 10:08:07.501846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.723 [2024-12-09 10:08:07.505369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.723 [2024-12-09 10:08:07.505481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.723 [2024-12-09 10:08:07.505491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.723 [2024-12-09 10:08:07.509031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.723 [2024-12-09 10:08:07.509069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.723 [2024-12-09 10:08:07.509077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.723 [2024-12-09 10:08:07.512872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.723 [2024-12-09 10:08:07.512961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.723 [2024-12-09 10:08:07.512996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.723 [2024-12-09 10:08:07.516595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.723 [2024-12-09 10:08:07.516693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:57:00.723 [2024-12-09 10:08:07.516728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.723 [2024-12-09 10:08:07.520329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.723 [2024-12-09 10:08:07.520420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.723 [2024-12-09 10:08:07.520474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.723 [2024-12-09 10:08:07.524144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.723 [2024-12-09 10:08:07.524227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.723 [2024-12-09 10:08:07.524261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.723 [2024-12-09 10:08:07.527793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.723 [2024-12-09 10:08:07.527867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.723 [2024-12-09 10:08:07.527876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.723 [2024-12-09 10:08:07.531611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.723 [2024-12-09 10:08:07.531726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.723 [2024-12-09 10:08:07.531775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.723 [2024-12-09 10:08:07.535483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.723 [2024-12-09 10:08:07.535585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.723 [2024-12-09 10:08:07.535629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.723 [2024-12-09 10:08:07.539123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.724 [2024-12-09 10:08:07.539204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.724 [2024-12-09 10:08:07.539260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.724 [2024-12-09 10:08:07.543015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.724 [2024-12-09 10:08:07.543112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12192 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.724 [2024-12-09 10:08:07.543147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.724 [2024-12-09 10:08:07.546919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.724 [2024-12-09 10:08:07.547014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.724 [2024-12-09 10:08:07.547048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.724 [2024-12-09 10:08:07.550741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.724 [2024-12-09 10:08:07.550824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.724 [2024-12-09 10:08:07.550859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.724 [2024-12-09 10:08:07.554294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.724 [2024-12-09 10:08:07.554389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.724 [2024-12-09 10:08:07.554423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.724 [2024-12-09 10:08:07.557960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.724 [2024-12-09 10:08:07.558041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.724 [2024-12-09 10:08:07.558073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.724 [2024-12-09 10:08:07.561546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.724 [2024-12-09 10:08:07.561625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.724 [2024-12-09 10:08:07.561658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.724 [2024-12-09 10:08:07.565219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.724 [2024-12-09 10:08:07.565303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.724 [2024-12-09 10:08:07.565337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.724 [2024-12-09 10:08:07.568618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.724 [2024-12-09 10:08:07.568699] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.724 [2024-12-09 10:08:07.568731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.724 [2024-12-09 10:08:07.572188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.724 [2024-12-09 10:08:07.572284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.724 [2024-12-09 10:08:07.572317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.724 [2024-12-09 10:08:07.575904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.724 [2024-12-09 10:08:07.575988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.724 [2024-12-09 10:08:07.576029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.724 [2024-12-09 10:08:07.579539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.724 [2024-12-09 10:08:07.579618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.724 [2024-12-09 10:08:07.579666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.724 [2024-12-09 10:08:07.583167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.724 [2024-12-09 10:08:07.583233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.724 [2024-12-09 10:08:07.583243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.724 [2024-12-09 10:08:07.586589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.724 [2024-12-09 10:08:07.586622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.724 [2024-12-09 10:08:07.586630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.724 [2024-12-09 10:08:07.590032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.724 [2024-12-09 10:08:07.590066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.724 [2024-12-09 10:08:07.590073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.724 [2024-12-09 10:08:07.593571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.724 
[2024-12-09 10:08:07.593602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.724 [2024-12-09 10:08:07.593611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.724 [2024-12-09 10:08:07.597180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.724 [2024-12-09 10:08:07.597214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.724 [2024-12-09 10:08:07.597222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.724 [2024-12-09 10:08:07.600742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.724 [2024-12-09 10:08:07.600776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.724 [2024-12-09 10:08:07.600785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.724 [2024-12-09 10:08:07.604269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.724 [2024-12-09 10:08:07.604361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.724 [2024-12-09 10:08:07.604370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.724 [2024-12-09 10:08:07.607810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.724 [2024-12-09 10:08:07.607847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.724 [2024-12-09 10:08:07.607856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.724 [2024-12-09 10:08:07.611325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.724 [2024-12-09 10:08:07.611360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.724 [2024-12-09 10:08:07.611368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.725 [2024-12-09 10:08:07.614894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.725 [2024-12-09 10:08:07.614927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.725 [2024-12-09 10:08:07.614935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.725 [2024-12-09 10:08:07.618493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1dbbdd0) 00:57:00.725 [2024-12-09 10:08:07.618523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.725 [2024-12-09 10:08:07.618531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.725 [2024-12-09 10:08:07.622023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.725 [2024-12-09 10:08:07.622056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.725 [2024-12-09 10:08:07.622064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.725 [2024-12-09 10:08:07.625542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.725 [2024-12-09 10:08:07.625575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.725 [2024-12-09 10:08:07.625582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.725 [2024-12-09 10:08:07.629116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.725 [2024-12-09 10:08:07.629150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.725 [2024-12-09 10:08:07.629158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.725 [2024-12-09 10:08:07.632694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.725 [2024-12-09 10:08:07.632728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.725 [2024-12-09 10:08:07.632735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.725 [2024-12-09 10:08:07.636322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.725 [2024-12-09 10:08:07.636418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.725 [2024-12-09 10:08:07.636428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.725 [2024-12-09 10:08:07.639915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.725 [2024-12-09 10:08:07.639951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.725 [2024-12-09 10:08:07.639959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.725 [2024-12-09 10:08:07.643379] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.725 [2024-12-09 10:08:07.643413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.725 [2024-12-09 10:08:07.643420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.725 [2024-12-09 10:08:07.646975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.725 [2024-12-09 10:08:07.647009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.725 [2024-12-09 10:08:07.647017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.725 [2024-12-09 10:08:07.650529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.725 [2024-12-09 10:08:07.650560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.725 [2024-12-09 10:08:07.650568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.725 [2024-12-09 10:08:07.654090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.725 [2024-12-09 10:08:07.654123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.725 [2024-12-09 10:08:07.654132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.725 [2024-12-09 10:08:07.657741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.725 [2024-12-09 10:08:07.657774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.725 [2024-12-09 10:08:07.657781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.725 [2024-12-09 10:08:07.661481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.725 [2024-12-09 10:08:07.661530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.725 [2024-12-09 10:08:07.661539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.725 [2024-12-09 10:08:07.665127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.725 [2024-12-09 10:08:07.665161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.725 [2024-12-09 10:08:07.665169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:57:00.725 [2024-12-09 10:08:07.668948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.725 [2024-12-09 10:08:07.668985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.725 [2024-12-09 10:08:07.668994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.725 [2024-12-09 10:08:07.672841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.725 [2024-12-09 10:08:07.672877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.725 [2024-12-09 10:08:07.672886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.725 [2024-12-09 10:08:07.676262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.725 [2024-12-09 10:08:07.676356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.725 [2024-12-09 10:08:07.676366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.725 [2024-12-09 10:08:07.678988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.725 [2024-12-09 10:08:07.679023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.725 [2024-12-09 10:08:07.679032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.725 [2024-12-09 10:08:07.682685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.725 [2024-12-09 10:08:07.682720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.725 [2024-12-09 10:08:07.682728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.725 [2024-12-09 10:08:07.686536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.725 [2024-12-09 10:08:07.686570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.725 [2024-12-09 10:08:07.686578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.725 [2024-12-09 10:08:07.690145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.725 [2024-12-09 10:08:07.690178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.725 [2024-12-09 10:08:07.690187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.725 [2024-12-09 10:08:07.692456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.725 [2024-12-09 10:08:07.692587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.725 [2024-12-09 10:08:07.692598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.725 [2024-12-09 10:08:07.696364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.725 [2024-12-09 10:08:07.696449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.725 [2024-12-09 10:08:07.696460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.725 [2024-12-09 10:08:07.700426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.725 [2024-12-09 10:08:07.700539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.725 [2024-12-09 10:08:07.700549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.725 [2024-12-09 10:08:07.704381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.725 [2024-12-09 10:08:07.704463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.725 [2024-12-09 10:08:07.704492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.725 [2024-12-09 10:08:07.706540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.725 [2024-12-09 10:08:07.706569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.726 [2024-12-09 10:08:07.706576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.726 [2024-12-09 10:08:07.710021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.726 [2024-12-09 10:08:07.710053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.726 [2024-12-09 10:08:07.710061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.726 [2024-12-09 10:08:07.713420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.726 [2024-12-09 10:08:07.713453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.726 [2024-12-09 10:08:07.713461] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.726 [2024-12-09 10:08:07.716811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.726 [2024-12-09 10:08:07.716845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.726 [2024-12-09 10:08:07.716852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.726 [2024-12-09 10:08:07.720059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.726 [2024-12-09 10:08:07.720135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.726 [2024-12-09 10:08:07.720145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.726 [2024-12-09 10:08:07.723594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.726 [2024-12-09 10:08:07.723628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.726 [2024-12-09 10:08:07.723636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:00.726 8836.00 IOPS, 1104.50 MiB/s [2024-12-09T10:08:07.770Z] [2024-12-09 10:08:07.728463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.726 [2024-12-09 10:08:07.728508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.726 [2024-12-09 10:08:07.728518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:00.726 [2024-12-09 10:08:07.732126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.726 [2024-12-09 10:08:07.732160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.726 [2024-12-09 10:08:07.732168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:00.726 [2024-12-09 10:08:07.735618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.726 [2024-12-09 10:08:07.735649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:00.726 [2024-12-09 10:08:07.735658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:00.726 [2024-12-09 10:08:07.739337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:00.726 [2024-12-09 10:08:07.739413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2624 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:00.726 [2024-12-09 10:08:07.739422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:57:00.726 [2024-12-09 10:08:07.742809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0)
00:57:00.726 [2024-12-09 10:08:07.742841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:00.726 [2024-12-09 10:08:07.742850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... several hundred near-identical records elided: nvme_tcp.c:1365 data digest errors on tqpair=(0x1dbbdd0), each followed by a READ command notice and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, spanning 10:08:07.746565 through 10:08:08.250763 ...]
00:57:01.254 [2024-12-09 10:08:08.254269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0)
00:57:01.254 [2024-12-09 10:08:08.254366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:01.254 [2024-12-09 10:08:08.254401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:57:01.254 [2024-12-09 10:08:08.257916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0)
00:57:01.254
[2024-12-09 10:08:08.257995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.254 [2024-12-09 10:08:08.258029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:01.254 [2024-12-09 10:08:08.261492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.254 [2024-12-09 10:08:08.261566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.254 [2024-12-09 10:08:08.261590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:01.255 [2024-12-09 10:08:08.264994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.255 [2024-12-09 10:08:08.265032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.255 [2024-12-09 10:08:08.265041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:01.255 [2024-12-09 10:08:08.268745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.255 [2024-12-09 10:08:08.268802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.255 [2024-12-09 10:08:08.268811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:01.255 [2024-12-09 10:08:08.272388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.255 [2024-12-09 10:08:08.272501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.255 [2024-12-09 10:08:08.272511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:01.255 [2024-12-09 10:08:08.275955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.255 [2024-12-09 10:08:08.275992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.255 [2024-12-09 10:08:08.276000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:01.255 [2024-12-09 10:08:08.279632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.255 [2024-12-09 10:08:08.279719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.255 [2024-12-09 10:08:08.279754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:01.255 [2024-12-09 10:08:08.283213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1dbbdd0) 00:57:01.255 [2024-12-09 10:08:08.283295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.255 [2024-12-09 10:08:08.283330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:01.255 [2024-12-09 10:08:08.286803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.255 [2024-12-09 10:08:08.286880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.255 [2024-12-09 10:08:08.286913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:01.255 [2024-12-09 10:08:08.290367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.255 [2024-12-09 10:08:08.290446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.255 [2024-12-09 10:08:08.290490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:01.516 [2024-12-09 10:08:08.293835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.516 [2024-12-09 10:08:08.293912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.516 [2024-12-09 10:08:08.293945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:01.516 [2024-12-09 10:08:08.297519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.516 [2024-12-09 10:08:08.297594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.516 [2024-12-09 10:08:08.297605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:01.516 [2024-12-09 10:08:08.301043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.516 [2024-12-09 10:08:08.301076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.516 [2024-12-09 10:08:08.301084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:01.516 [2024-12-09 10:08:08.304582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.516 [2024-12-09 10:08:08.304615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.516 [2024-12-09 10:08:08.304623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:01.516 [2024-12-09 10:08:08.308014] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.516 [2024-12-09 10:08:08.308115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.516 [2024-12-09 10:08:08.308125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:01.516 [2024-12-09 10:08:08.311559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.516 [2024-12-09 10:08:08.311592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.517 [2024-12-09 10:08:08.311600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:01.517 [2024-12-09 10:08:08.315056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.517 [2024-12-09 10:08:08.315090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.517 [2024-12-09 10:08:08.315098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:01.517 [2024-12-09 10:08:08.318636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.517 [2024-12-09 10:08:08.318670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.517 [2024-12-09 10:08:08.318678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:01.517 [2024-12-09 10:08:08.322221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.517 [2024-12-09 10:08:08.322257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.517 [2024-12-09 10:08:08.322266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:01.517 [2024-12-09 10:08:08.325757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.517 [2024-12-09 10:08:08.325790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.517 [2024-12-09 10:08:08.325798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:01.517 [2024-12-09 10:08:08.329157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.517 [2024-12-09 10:08:08.329192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.517 [2024-12-09 10:08:08.329200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:57:01.517 [2024-12-09 10:08:08.332670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.517 [2024-12-09 10:08:08.332702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.517 [2024-12-09 10:08:08.332709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:01.517 [2024-12-09 10:08:08.336222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.517 [2024-12-09 10:08:08.336328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.517 [2024-12-09 10:08:08.336337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:01.517 [2024-12-09 10:08:08.339846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.517 [2024-12-09 10:08:08.339881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.517 [2024-12-09 10:08:08.339889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:01.517 [2024-12-09 10:08:08.343399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.517 [2024-12-09 10:08:08.343434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.517 [2024-12-09 10:08:08.343442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:01.517 [2024-12-09 10:08:08.346910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.517 [2024-12-09 10:08:08.346944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.517 [2024-12-09 10:08:08.346952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:01.517 [2024-12-09 10:08:08.350449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.517 [2024-12-09 10:08:08.350491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.517 [2024-12-09 10:08:08.350500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:01.517 [2024-12-09 10:08:08.353871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.517 [2024-12-09 10:08:08.353905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.517 [2024-12-09 10:08:08.353912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:01.517 [2024-12-09 10:08:08.357683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.517 [2024-12-09 10:08:08.357716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.517 [2024-12-09 10:08:08.357725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:01.517 [2024-12-09 10:08:08.361524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.517 [2024-12-09 10:08:08.361559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.517 [2024-12-09 10:08:08.361568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:01.517 [2024-12-09 10:08:08.365305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.517 [2024-12-09 10:08:08.365339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.517 [2024-12-09 10:08:08.365347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:01.517 [2024-12-09 10:08:08.369060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.517 [2024-12-09 10:08:08.369092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.517 [2024-12-09 10:08:08.369100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:01.517 [2024-12-09 10:08:08.372723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.517 [2024-12-09 10:08:08.372776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.517 [2024-12-09 10:08:08.372785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:01.517 [2024-12-09 10:08:08.376555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.517 [2024-12-09 10:08:08.376591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.517 [2024-12-09 10:08:08.376600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:01.517 [2024-12-09 10:08:08.380592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.517 [2024-12-09 10:08:08.380630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.517 [2024-12-09 10:08:08.380640] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:01.517 [2024-12-09 10:08:08.384397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.517 [2024-12-09 10:08:08.384523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.517 [2024-12-09 10:08:08.384534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:01.517 [2024-12-09 10:08:08.388322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.517 [2024-12-09 10:08:08.388423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.517 [2024-12-09 10:08:08.388433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:01.517 [2024-12-09 10:08:08.392635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.517 [2024-12-09 10:08:08.392670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.517 [2024-12-09 10:08:08.392678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:01.517 [2024-12-09 10:08:08.396716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.517 [2024-12-09 10:08:08.396752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.517 [2024-12-09 10:08:08.396760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:01.517 [2024-12-09 10:08:08.400586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.517 [2024-12-09 10:08:08.400621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.517 [2024-12-09 10:08:08.400629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:01.517 [2024-12-09 10:08:08.404466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.517 [2024-12-09 10:08:08.404512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.517 [2024-12-09 10:08:08.404521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:01.517 [2024-12-09 10:08:08.408289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.517 [2024-12-09 10:08:08.408393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.517 [2024-12-09 10:08:08.408402] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:01.517 [2024-12-09 10:08:08.411976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.517 [2024-12-09 10:08:08.412013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.517 [2024-12-09 10:08:08.412022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:01.518 [2024-12-09 10:08:08.415680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.518 [2024-12-09 10:08:08.415766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.518 [2024-12-09 10:08:08.415802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:01.518 [2024-12-09 10:08:08.419769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.518 [2024-12-09 10:08:08.419861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.518 [2024-12-09 10:08:08.419902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:01.518 [2024-12-09 10:08:08.423954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.518 [2024-12-09 10:08:08.424045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.518 [2024-12-09 10:08:08.424089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:01.518 [2024-12-09 10:08:08.428022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.518 [2024-12-09 10:08:08.428111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.518 [2024-12-09 10:08:08.428150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:01.518 [2024-12-09 10:08:08.431882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.518 [2024-12-09 10:08:08.431960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.518 [2024-12-09 10:08:08.431971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:01.518 [2024-12-09 10:08:08.435701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.518 [2024-12-09 10:08:08.435799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:57:01.518 [2024-12-09 10:08:08.435839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:01.518 [2024-12-09 10:08:08.439555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.518 [2024-12-09 10:08:08.439635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.518 [2024-12-09 10:08:08.439689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:01.518 [2024-12-09 10:08:08.443416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.518 [2024-12-09 10:08:08.443509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.518 [2024-12-09 10:08:08.443550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:01.518 [2024-12-09 10:08:08.446911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.518 [2024-12-09 10:08:08.446990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.518 [2024-12-09 10:08:08.447032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:01.518 [2024-12-09 10:08:08.450731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.518 [2024-12-09 10:08:08.450812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.518 [2024-12-09 10:08:08.450871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:01.518 [2024-12-09 10:08:08.454607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.518 [2024-12-09 10:08:08.454643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.518 [2024-12-09 10:08:08.454651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:01.518 [2024-12-09 10:08:08.458263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.518 [2024-12-09 10:08:08.458297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.518 [2024-12-09 10:08:08.458306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:01.518 [2024-12-09 10:08:08.461884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.518 [2024-12-09 10:08:08.461919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 
nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.518 [2024-12-09 10:08:08.461927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:01.518 [2024-12-09 10:08:08.465628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.518 [2024-12-09 10:08:08.465662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.518 [2024-12-09 10:08:08.465670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:01.518 [2024-12-09 10:08:08.469471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.518 [2024-12-09 10:08:08.469578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.518 [2024-12-09 10:08:08.469587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:01.518 [2024-12-09 10:08:08.473200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.518 [2024-12-09 10:08:08.473234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.518 [2024-12-09 10:08:08.473243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:01.518 [2024-12-09 10:08:08.476796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.518 [2024-12-09 10:08:08.476829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.518 [2024-12-09 10:08:08.476838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:01.518 [2024-12-09 10:08:08.480410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.518 [2024-12-09 10:08:08.480503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.518 [2024-12-09 10:08:08.480513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:01.518 [2024-12-09 10:08:08.484082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.518 [2024-12-09 10:08:08.484116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.518 [2024-12-09 10:08:08.484124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:01.518 [2024-12-09 10:08:08.487602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.518 [2024-12-09 10:08:08.487699] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.518 [2024-12-09 10:08:08.487736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:01.518 [2024-12-09 10:08:08.491145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.518 [2024-12-09 10:08:08.491221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.518 [2024-12-09 10:08:08.491254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:01.518 [2024-12-09 10:08:08.494705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.518 [2024-12-09 10:08:08.494780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.518 [2024-12-09 10:08:08.494814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:01.518 [2024-12-09 10:08:08.498285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.518 [2024-12-09 10:08:08.498364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.518 [2024-12-09 10:08:08.498398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:01.518 [2024-12-09 10:08:08.501826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.518 [2024-12-09 10:08:08.501859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.518 [2024-12-09 10:08:08.501866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:01.518 [2024-12-09 10:08:08.505272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.518 [2024-12-09 10:08:08.505307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.518 [2024-12-09 10:08:08.505315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:01.518 [2024-12-09 10:08:08.508842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.518 [2024-12-09 10:08:08.508873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.518 [2024-12-09 10:08:08.508881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:01.518 [2024-12-09 10:08:08.512195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.518 
[2024-12-09 10:08:08.512294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.518 [2024-12-09 10:08:08.512304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:01.518 [2024-12-09 10:08:08.515795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.519 [2024-12-09 10:08:08.515831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.519 [2024-12-09 10:08:08.515840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:01.519 [2024-12-09 10:08:08.519351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.519 [2024-12-09 10:08:08.519385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.519 [2024-12-09 10:08:08.519394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:01.519 [2024-12-09 10:08:08.522814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.519 [2024-12-09 10:08:08.522848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.519 [2024-12-09 10:08:08.522856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:01.519 [2024-12-09 10:08:08.526279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.519 [2024-12-09 10:08:08.526313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.519 [2024-12-09 10:08:08.526321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:01.519 [2024-12-09 10:08:08.529790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.519 [2024-12-09 10:08:08.529825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.519 [2024-12-09 10:08:08.529833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:01.519 [2024-12-09 10:08:08.533294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.519 [2024-12-09 10:08:08.533328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.519 [2024-12-09 10:08:08.533336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:01.519 [2024-12-09 10:08:08.536748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1dbbdd0) 00:57:01.519 [2024-12-09 10:08:08.536779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.519 [2024-12-09 10:08:08.536786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:01.519 [2024-12-09 10:08:08.540173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.519 [2024-12-09 10:08:08.540261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.519 [2024-12-09 10:08:08.540270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:01.519 [2024-12-09 10:08:08.543795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.519 [2024-12-09 10:08:08.543830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.519 [2024-12-09 10:08:08.543838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:01.519 [2024-12-09 10:08:08.547310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.519 [2024-12-09 10:08:08.547343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.519 [2024-12-09 10:08:08.547351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:01.519 [2024-12-09 10:08:08.550801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.519 [2024-12-09 10:08:08.550834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.519 [2024-12-09 10:08:08.550842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:01.519 [2024-12-09 10:08:08.554159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.519 [2024-12-09 10:08:08.554194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.519 [2024-12-09 10:08:08.554202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:01.780 [2024-12-09 10:08:08.557703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.780 [2024-12-09 10:08:08.557735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.780 [2024-12-09 10:08:08.557743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:01.780 [2024-12-09 10:08:08.561356] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.780 [2024-12-09 10:08:08.561391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.780 [2024-12-09 10:08:08.561399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:01.780 [2024-12-09 10:08:08.565009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.780 [2024-12-09 10:08:08.565042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.781 [2024-12-09 10:08:08.565050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:01.781 [2024-12-09 10:08:08.568540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.781 [2024-12-09 10:08:08.568571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.781 [2024-12-09 10:08:08.568579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:01.781 [2024-12-09 10:08:08.571995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.781 [2024-12-09 10:08:08.572029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.781 [2024-12-09 10:08:08.572037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:01.781 [2024-12-09 10:08:08.575501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.781 [2024-12-09 10:08:08.575595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.781 [2024-12-09 10:08:08.575630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:01.781 [2024-12-09 10:08:08.579006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.781 [2024-12-09 10:08:08.579081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.781 [2024-12-09 10:08:08.579115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:01.781 [2024-12-09 10:08:08.582605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.781 [2024-12-09 10:08:08.582683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.781 [2024-12-09 10:08:08.582717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:57:01.781 [2024-12-09 10:08:08.586085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.781 [2024-12-09 10:08:08.586162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.781 [2024-12-09 10:08:08.586195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:01.781 [2024-12-09 10:08:08.589566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.781 [2024-12-09 10:08:08.589644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.781 [2024-12-09 10:08:08.589677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:01.781 [2024-12-09 10:08:08.592927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.781 [2024-12-09 10:08:08.592995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.781 [2024-12-09 10:08:08.593004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:01.781 [2024-12-09 10:08:08.596312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.781 [2024-12-09 10:08:08.596392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.781 [2024-12-09 10:08:08.596401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:01.781 [2024-12-09 10:08:08.599962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.781 [2024-12-09 10:08:08.599996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.781 [2024-12-09 10:08:08.600004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:01.781 [2024-12-09 10:08:08.603392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.781 [2024-12-09 10:08:08.603495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.781 [2024-12-09 10:08:08.603534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:01.781 [2024-12-09 10:08:08.606940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.781 [2024-12-09 10:08:08.607015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.781 [2024-12-09 10:08:08.607049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:01.781 [2024-12-09 10:08:08.610532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.781 [2024-12-09 10:08:08.610608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.781 [2024-12-09 10:08:08.610641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:01.781 [2024-12-09 10:08:08.613939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.781 [2024-12-09 10:08:08.614016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.781 [2024-12-09 10:08:08.614050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:01.781 [2024-12-09 10:08:08.617435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.781 [2024-12-09 10:08:08.617523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.781 [2024-12-09 10:08:08.617557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:01.781 [2024-12-09 10:08:08.620749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.781 [2024-12-09 10:08:08.620828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.781 [2024-12-09 10:08:08.620852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:01.781 [2024-12-09 10:08:08.624195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.781 [2024-12-09 10:08:08.624274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.781 [2024-12-09 10:08:08.624283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:01.781 [2024-12-09 10:08:08.627725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.781 [2024-12-09 10:08:08.627757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.781 [2024-12-09 10:08:08.627765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:01.781 [2024-12-09 10:08:08.631182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.781 [2024-12-09 10:08:08.631215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.781 [2024-12-09 10:08:08.631223] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:01.781 [2024-12-09 10:08:08.634579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.781 [2024-12-09 10:08:08.634612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.781 [2024-12-09 10:08:08.634620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:01.781 [2024-12-09 10:08:08.638007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.781 [2024-12-09 10:08:08.638039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.781 [2024-12-09 10:08:08.638047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:01.781 [2024-12-09 10:08:08.641266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.781 [2024-12-09 10:08:08.641299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.781 [2024-12-09 10:08:08.641307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:01.781 [2024-12-09 10:08:08.644658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.781 [2024-12-09 10:08:08.644692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.781 [2024-12-09 10:08:08.644699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:01.781 [2024-12-09 10:08:08.648036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.781 [2024-12-09 10:08:08.648069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.781 [2024-12-09 10:08:08.648077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:01.781 [2024-12-09 10:08:08.651552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.781 [2024-12-09 10:08:08.651635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.781 [2024-12-09 10:08:08.651670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:01.781 [2024-12-09 10:08:08.655142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.781 [2024-12-09 10:08:08.655218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
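Each "data digest error" above surfaces to the host as an NVMe completion with status COMMAND TRANSIENT TRANSPORT ERROR (00/22), and the harness judges the case by counting those completions rather than by parsing log text. A minimal bash sketch of that counting step, reusing the rpc.py path, /var/tmp/bperf.sock socket, bdev name, and jq filter that appear verbatim in the trace further below; the count variable and the final echo are illustrative additions:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # bdev_get_iostat carries per-status-code NVMe error counters here because
  # bdevperf is configured with: bdev_nvme_set_options --nvme-error-stat
  count=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( count > 0 )) && echo "counted $count transient transport errors"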
00:57:01.781 [2024-12-09 10:08:08.655252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:01.781 [2024-12-09 10:08:08.658693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.781 [2024-12-09 10:08:08.658768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.782 [2024-12-09 10:08:08.658801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:01.782 [2024-12-09 10:08:08.662310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.782 [2024-12-09 10:08:08.662387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.782 [2024-12-09 10:08:08.662421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:01.782 [2024-12-09 10:08:08.665690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.782 [2024-12-09 10:08:08.665765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.782 [2024-12-09 10:08:08.665775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:01.782 [2024-12-09 10:08:08.669266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.782 [2024-12-09 10:08:08.669301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.782 [2024-12-09 10:08:08.669309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:01.782 [2024-12-09 10:08:08.672922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.782 [2024-12-09 10:08:08.672958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.782 [2024-12-09 10:08:08.672968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:01.782 [2024-12-09 10:08:08.676905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.782 [2024-12-09 10:08:08.676943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.782 [2024-12-09 10:08:08.676952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:01.782 [2024-12-09 10:08:08.680570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.782 [2024-12-09 10:08:08.680604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1024 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.782 [2024-12-09 10:08:08.680614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:01.782 [2024-12-09 10:08:08.684554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.782 [2024-12-09 10:08:08.684608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.782 [2024-12-09 10:08:08.684619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:01.782 [2024-12-09 10:08:08.688756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.782 [2024-12-09 10:08:08.688792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.782 [2024-12-09 10:08:08.688802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:01.782 [2024-12-09 10:08:08.692785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.782 [2024-12-09 10:08:08.692822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.782 [2024-12-09 10:08:08.692832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:01.782 [2024-12-09 10:08:08.696941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.782 [2024-12-09 10:08:08.696979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.782 [2024-12-09 10:08:08.696988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:01.782 [2024-12-09 10:08:08.700966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.782 [2024-12-09 10:08:08.701004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.782 [2024-12-09 10:08:08.701013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:01.782 [2024-12-09 10:08:08.704966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.782 [2024-12-09 10:08:08.705002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:01.782 [2024-12-09 10:08:08.705011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:01.782 [2024-12-09 10:08:08.708847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0) 00:57:01.782 [2024-12-09 10:08:08.708883] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:01.782 [2024-12-09 10:08:08.708891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:57:01.782 [2024-12-09 10:08:08.712639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0)
00:57:01.782 [2024-12-09 10:08:08.712674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:01.782 [2024-12-09 10:08:08.712682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:57:01.782 [2024-12-09 10:08:08.716199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0)
00:57:01.782 [2024-12-09 10:08:08.716307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:01.782 [2024-12-09 10:08:08.716317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:57:01.782 [2024-12-09 10:08:08.719820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0)
00:57:01.782 [2024-12-09 10:08:08.719854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:01.782 [2024-12-09 10:08:08.719862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:57:01.782 8710.00 IOPS, 1088.75 MiB/s [2024-12-09T10:08:08.826Z] [2024-12-09 10:08:08.724807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbbdd0)
00:57:01.782 [2024-12-09 10:08:08.724840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:01.782 [2024-12-09 10:08:08.724849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:57:01.782
00:57:01.782                                                     Latency(us)
00:57:01.782 [2024-12-09T10:08:08.826Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:57:01.782 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:57:01.782 nvme0n1 : 2.00            8707.82    1088.48       0.00     0.00    1834.51     490.09    9787.47
00:57:01.782 [2024-12-09T10:08:08.826Z] ===================================================================================================================
00:57:01.782 [2024-12-09T10:08:08.826Z] Total : 8707.82    1088.48       0.00     0.00    1834.51     490.09    9787.47
00:57:01.782 {
00:57:01.782   "results": [
00:57:01.782     {
00:57:01.782       "job": "nvme0n1",
00:57:01.782       "core_mask": "0x2",
00:57:01.782       "workload": "randread",
00:57:01.782       "status": "finished",
00:57:01.782       "queue_depth": 16,
00:57:01.782       "io_size": 131072,
00:57:01.782       "runtime": 2.002339,
00:57:01.782       "iops": 8707.816208943641,
00:57:01.782       "mibps": 1088.4770261179551,
00:57:01.782       "io_failed": 0,
00:57:01.782       "io_timeout": 0,
00:57:01.782       "avg_latency_us": 1834.506456851307,
00:57:01.782       "min_latency_us": 490.0890829694323,
00:57:01.782       "max_latency_us": 9787.47248908297
00:57:01.782     }
00:57:01.782   ],
00:57:01.782   "core_count": 1
00:57:01.782 }
00:57:01.782 10:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:57:01.782 10:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:57:01.782 10:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:57:01.782 10:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:57:01.782 | .driver_specific
00:57:01.782 | .nvme_error
00:57:01.782 | .status_code
00:57:01.782 | .command_transient_transport_error'
00:57:02.042 10:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 563 > 0 ))
00:57:02.042 10:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95369
00:57:02.042 10:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 95369 ']'
00:57:02.042 10:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 95369
00:57:02.042 10:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:57:02.042 10:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:57:02.042 10:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95369
00:57:02.042 killing process with pid 95369
00:57:02.042 Received shutdown signal, test time was about 2.000000 seconds
00:57:02.042
00:57:02.042                                                     Latency(us)
00:57:02.042 [2024-12-09T10:08:09.086Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:57:02.042 [2024-12-09T10:08:09.086Z] ===================================================================================================================
00:57:02.042 [2024-12-09T10:08:09.086Z] Total : 0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:57:02.042 10:08:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:57:02.042 10:08:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:57:02.042 10:08:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95369'
00:57:02.042 10:08:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 95369
00:57:02.042 10:08:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 95369
00:57:02.302 10:08:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:57:02.302 10:08:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:57:02.302 10:08:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:57:02.302 10:08:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:57:02.302 10:08:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:57:02.302 10:08:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:57:02.302 10:08:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95459
00:57:02.302 10:08:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95459 /var/tmp/bperf.sock
00:57:02.302 10:08:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 95459 ']'
00:57:02.302 10:08:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:57:02.302 10:08:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:57:02.302 10:08:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:57:02.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:57:02.302 10:08:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:57:02.302 10:08:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:57:02.561 [2024-12-09 10:08:09.232926] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization...
00:57:02.561 [2024-12-09 10:08:09.233424] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95459 ]
00:57:02.561 [2024-12-09 10:08:09.374308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:57:02.561 [2024-12-09 10:08:09.426872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:57:03.500 10:08:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:57:03.500 10:08:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:57:03.500 10:08:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:57:03.500 10:08:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:57:03.500 10:08:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:57:03.500 10:08:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:03.500 10:08:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:57:03.500 10:08:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:03.500 10:08:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:57:03.500 10:08:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:57:03.760 nvme0n1
00:57:03.760 10:08:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:57:03.760 10:08:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:03.760 10:08:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:57:03.760 10:08:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:03.760 10:08:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:57:03.760 10:08:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:57:04.020 Running I/O for 2 seconds...
00:57:04.020 [2024-12-09 10:08:10.813627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef2948
00:57:04.020 [2024-12-09 10:08:10.814681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:57:04.020 [2024-12-09 10:08:10.814940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:57:04.020 [2024-12-09 10:08:10.825334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee1b48
00:57:04.020 [2024-12-09 10:08:10.827026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:57:04.020 [2024-12-09 10:08:10.827110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:57:04.020 [2024-12-09 10:08:10.832345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee4578
00:57:04.020 [2024-12-09 10:08:10.833193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:57:04.020 [2024-12-09 10:08:10.833256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:57:04.020 [2024-12-09 10:08:10.843626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef5378
00:57:04.020 [2024-12-09 10:08:10.844900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:57:04.020 [2024-12-09 10:08:10.845014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:57:04.020 [2024-12-09 10:08:10.852339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef8e88
00:57:04.020 [2024-12-09 10:08:10.853380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:57:04.020 [2024-12-09 10:08:10.853531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:57:04.020 [2024-12-09 10:08:10.860915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with
pdu=0x200016ef0788 00:57:04.020 [2024-12-09 10:08:10.862002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.020 [2024-12-09 10:08:10.862109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:57:04.020 [2024-12-09 10:08:10.871693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016edf988 00:57:04.020 [2024-12-09 10:08:10.873274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.020 [2024-12-09 10:08:10.873387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:57:04.020 [2024-12-09 10:08:10.878550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee6738 00:57:04.020 [2024-12-09 10:08:10.879484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.020 [2024-12-09 10:08:10.879623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:57:04.020 [2024-12-09 10:08:10.890438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efe720 00:57:04.020 [2024-12-09 10:08:10.891809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.020 [2024-12-09 10:08:10.891937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:57:04.020 [2024-12-09 10:08:10.899225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef6cc8 00:57:04.020 [2024-12-09 10:08:10.900514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.020 [2024-12-09 10:08:10.900657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:57:04.020 [2024-12-09 10:08:10.908902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eee5c8 00:57:04.020 [2024-12-09 10:08:10.910100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:6540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.020 [2024-12-09 10:08:10.910169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:57:04.020 [2024-12-09 10:08:10.917815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee01f8 00:57:04.020 [2024-12-09 10:08:10.918629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.020 [2024-12-09 10:08:10.918705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:57:04.020 [2024-12-09 10:08:10.926560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xc9d570) with pdu=0x200016ee8088 00:57:04.020 [2024-12-09 10:08:10.927451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.020 [2024-12-09 10:08:10.927541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:57:04.020 [2024-12-09 10:08:10.935503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef4f40 00:57:04.021 [2024-12-09 10:08:10.936024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.021 [2024-12-09 10:08:10.936099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:57:04.021 [2024-12-09 10:08:10.945485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef6890 00:57:04.021 [2024-12-09 10:08:10.946639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.021 [2024-12-09 10:08:10.946712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:57:04.021 [2024-12-09 10:08:10.953696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee5658 00:57:04.021 [2024-12-09 10:08:10.954678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.021 [2024-12-09 10:08:10.954753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:57:04.021 [2024-12-09 10:08:10.962255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef6890 00:57:04.021 [2024-12-09 10:08:10.963323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.021 [2024-12-09 10:08:10.963397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:57:04.021 [2024-12-09 10:08:10.971256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee4140 00:57:04.021 [2024-12-09 10:08:10.972321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:14375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.021 [2024-12-09 10:08:10.972397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:57:04.021 [2024-12-09 10:08:10.979781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee8d30 00:57:04.021 [2024-12-09 10:08:10.980643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.021 [2024-12-09 10:08:10.980717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:57:04.021 [2024-12-09 10:08:10.988506] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee9168 00:57:04.021 [2024-12-09 10:08:10.989422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.021 [2024-12-09 10:08:10.989519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:57:04.021 [2024-12-09 10:08:10.999349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef7100 00:57:04.021 [2024-12-09 10:08:11.001004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.021 [2024-12-09 10:08:11.001075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:04.021 [2024-12-09 10:08:11.006223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef7970 00:57:04.021 [2024-12-09 10:08:11.006981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.021 [2024-12-09 10:08:11.007055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:57:04.021 [2024-12-09 10:08:11.015753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef4f40 00:57:04.021 [2024-12-09 10:08:11.016566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:3369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.021 [2024-12-09 10:08:11.016645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:57:04.021 [2024-12-09 10:08:11.027251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee7818 00:57:04.021 [2024-12-09 10:08:11.028803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.021 [2024-12-09 10:08:11.028904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:57:04.021 [2024-12-09 10:08:11.034031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016edf118 00:57:04.021 [2024-12-09 10:08:11.034665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.021 [2024-12-09 10:08:11.034737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:57:04.021 [2024-12-09 10:08:11.043202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eea680 00:57:04.021 [2024-12-09 10:08:11.043827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.021 [2024-12-09 10:08:11.043902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:57:04.021 [2024-12-09 10:08:11.054094] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef81e0 00:57:04.021 [2024-12-09 10:08:11.055361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.021 [2024-12-09 10:08:11.055426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:57:04.282 [2024-12-09 10:08:11.062596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef5378 00:57:04.282 [2024-12-09 10:08:11.063537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.282 [2024-12-09 10:08:11.063568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:57:04.282 [2024-12-09 10:08:11.071422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee3498 00:57:04.282 [2024-12-09 10:08:11.072476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.282 [2024-12-09 10:08:11.072508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:57:04.282 [2024-12-09 10:08:11.082102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efef90 00:57:04.282 [2024-12-09 10:08:11.083620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.282 [2024-12-09 10:08:11.083652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:57:04.282 [2024-12-09 10:08:11.088575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016edfdc0 00:57:04.282 [2024-12-09 10:08:11.089392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:10442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.282 [2024-12-09 10:08:11.089418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:57:04.282 [2024-12-09 10:08:11.097732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef8618 00:57:04.282 [2024-12-09 10:08:11.098452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.282 [2024-12-09 10:08:11.098535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:57:04.282 [2024-12-09 10:08:11.109396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016edf550 00:57:04.282 [2024-12-09 10:08:11.110941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.282 [2024-12-09 10:08:11.110972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:57:04.282 
[2024-12-09 10:08:11.118656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef9b30 00:57:04.282 [2024-12-09 10:08:11.120350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.282 [2024-12-09 10:08:11.120386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:57:04.282 [2024-12-09 10:08:11.127610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef6458 00:57:04.282 [2024-12-09 10:08:11.128423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.282 [2024-12-09 10:08:11.128449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:57:04.282 [2024-12-09 10:08:11.138545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ede470 00:57:04.282 [2024-12-09 10:08:11.139946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.282 [2024-12-09 10:08:11.139974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:04.282 [2024-12-09 10:08:11.146927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef4f40 00:57:04.282 [2024-12-09 10:08:11.147845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.282 [2024-12-09 10:08:11.147876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:04.282 [2024-12-09 10:08:11.155579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ede470 00:57:04.282 [2024-12-09 10:08:11.156477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.282 [2024-12-09 10:08:11.156515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:57:04.282 [2024-12-09 10:08:11.166575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef6458 00:57:04.282 [2024-12-09 10:08:11.168199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.282 [2024-12-09 10:08:11.168227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:57:04.282 [2024-12-09 10:08:11.173374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eebb98 00:57:04.282 [2024-12-09 10:08:11.174100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.282 [2024-12-09 10:08:11.174176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 
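For context, the randwrite-phase errors above are manufactured by the RPCs traced before this run: accel_error_inject_error corrupts CRC32C computations in the accel layer, so computed data digests stop matching the payload, and bdev_nvme_attach_controller --ddgst ensures data digests are actually enabled on the TCP path; tcp.c (the target-side transport) reporting these write-phase digest errors is consistent with that. A condensed bash sketch of the setup, with every flag copied from the traced commands; the wrapper definitions are reconstructions of the host/digest.sh@18/@19 expansions, and the socket rpc_cmd targets is not visible in this excerpt, so defaulting it is an assumption:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bperf_rpc() { "$rpc" -s /var/tmp/bperf.sock "$@"; }    # per host/digest.sh@18
  bperf_py()  { /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock "$@"; }  # per host/digest.sh@19
  rpc_cmd()   { "$rpc" "$@"; }                           # assumed: harness wrapper on the default app socket

  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # keep NVMe error counters, never fail an I/O permanently
  rpc_cmd accel_error_inject_error -o crc32c -t disable                     # clear any injection left over from the previous pass
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0  # data digest on
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256              # corrupt the next 256 crc32c operations
  bperf_py perform_tests                                                    # start the queued bdevperf job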
00:57:04.282 [2024-12-09 10:08:11.184363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee3060 00:57:04.282 [2024-12-09 10:08:11.185616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.282 [2024-12-09 10:08:11.185647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:57:04.282 [2024-12-09 10:08:11.192454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efd640 00:57:04.282 [2024-12-09 10:08:11.193851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.282 [2024-12-09 10:08:11.193876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:57:04.282 [2024-12-09 10:08:11.200451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efe720 00:57:04.282 [2024-12-09 10:08:11.201230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.282 [2024-12-09 10:08:11.201255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:57:04.282 [2024-12-09 10:08:11.212735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efac10 00:57:04.282 [2024-12-09 10:08:11.214019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.282 [2024-12-09 10:08:11.214052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:57:04.282 [2024-12-09 10:08:11.222305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eeb760 00:57:04.282 [2024-12-09 10:08:11.223544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.282 [2024-12-09 10:08:11.223575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:57:04.282 [2024-12-09 10:08:11.231574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efd208 00:57:04.282 [2024-12-09 10:08:11.232743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.282 [2024-12-09 10:08:11.232770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:57:04.282 [2024-12-09 10:08:11.240988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee95a0 00:57:04.282 [2024-12-09 10:08:11.242159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.282 [2024-12-09 10:08:11.242238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004c 
p:0 m:0 dnr:0 00:57:04.283 [2024-12-09 10:08:11.249899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eeaef0 00:57:04.283 [2024-12-09 10:08:11.250926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.283 [2024-12-09 10:08:11.251005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:57:04.283 [2024-12-09 10:08:11.258739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef31b8 00:57:04.283 [2024-12-09 10:08:11.259817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.283 [2024-12-09 10:08:11.259892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:57:04.283 [2024-12-09 10:08:11.267861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee4140 00:57:04.283 [2024-12-09 10:08:11.268941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.283 [2024-12-09 10:08:11.269013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:57:04.283 [2024-12-09 10:08:11.278272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee8088 00:57:04.283 [2024-12-09 10:08:11.279865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.283 [2024-12-09 10:08:11.279894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:57:04.283 [2024-12-09 10:08:11.284885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efb480 00:57:04.283 [2024-12-09 10:08:11.285724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.283 [2024-12-09 10:08:11.285748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:57:04.283 [2024-12-09 10:08:11.295622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee2c28 00:57:04.283 [2024-12-09 10:08:11.296985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.283 [2024-12-09 10:08:11.297016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:57:04.283 [2024-12-09 10:08:11.305022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eefae0 00:57:04.283 [2024-12-09 10:08:11.306252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.283 [2024-12-09 10:08:11.306282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 
cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:57:04.283 [2024-12-09 10:08:11.313363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016edfdc0 00:57:04.283 [2024-12-09 10:08:11.314544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.283 [2024-12-09 10:08:11.314634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:57:04.283 [2024-12-09 10:08:11.321505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eeff18 00:57:04.283 [2024-12-09 10:08:11.322358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.283 [2024-12-09 10:08:11.322435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:57:04.544 [2024-12-09 10:08:11.330085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee6b70 00:57:04.544 [2024-12-09 10:08:11.330985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.544 [2024-12-09 10:08:11.331015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:57:04.544 [2024-12-09 10:08:11.340502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efa3a0 00:57:04.544 [2024-12-09 10:08:11.341865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.544 [2024-12-09 10:08:11.341893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:57:04.544 [2024-12-09 10:08:11.346742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef8618 00:57:04.544 [2024-12-09 10:08:11.347515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:8074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.544 [2024-12-09 10:08:11.347537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:57:04.544 [2024-12-09 10:08:11.357269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee0ea0 00:57:04.544 [2024-12-09 10:08:11.358432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.544 [2024-12-09 10:08:11.358461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:57:04.544 [2024-12-09 10:08:11.365392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef5378 00:57:04.544 [2024-12-09 10:08:11.366312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.544 [2024-12-09 10:08:11.366341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:85 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:57:04.544 [2024-12-09 10:08:11.373941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efc128 00:57:04.544 [2024-12-09 10:08:11.374892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.544 [2024-12-09 10:08:11.374919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:57:04.544 [2024-12-09 10:08:11.384327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efcdd0 00:57:04.544 [2024-12-09 10:08:11.385784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.544 [2024-12-09 10:08:11.385809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:57:04.544 [2024-12-09 10:08:11.390614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eebfd0 00:57:04.544 [2024-12-09 10:08:11.391329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.544 [2024-12-09 10:08:11.391377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:57:04.544 [2024-12-09 10:08:11.401007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eeee38 00:57:04.544 [2024-12-09 10:08:11.402263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:3579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.544 [2024-12-09 10:08:11.402292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:57:04.544 [2024-12-09 10:08:11.409672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef20d8 00:57:04.544 [2024-12-09 10:08:11.410766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.544 [2024-12-09 10:08:11.410799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:57:04.544 [2024-12-09 10:08:11.418655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eed4e8 00:57:04.544 [2024-12-09 10:08:11.419698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.544 [2024-12-09 10:08:11.419727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:57:04.544 [2024-12-09 10:08:11.427255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efd640 00:57:04.544 [2024-12-09 10:08:11.428134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.544 [2024-12-09 10:08:11.428165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:57:04.544 [2024-12-09 10:08:11.436985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eedd58 00:57:04.544 [2024-12-09 10:08:11.437711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.544 [2024-12-09 10:08:11.437743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:57:04.544 [2024-12-09 10:08:11.447102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee0a68 00:57:04.544 [2024-12-09 10:08:11.448272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.544 [2024-12-09 10:08:11.448296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:57:04.544 [2024-12-09 10:08:11.454822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee6738 00:57:04.544 [2024-12-09 10:08:11.455406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.544 [2024-12-09 10:08:11.455448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:57:04.544 [2024-12-09 10:08:11.465760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee01f8 00:57:04.544 [2024-12-09 10:08:11.466522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:11827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.544 [2024-12-09 10:08:11.466545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:57:04.544 [2024-12-09 10:08:11.474461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eed4e8 00:57:04.544 [2024-12-09 10:08:11.475038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.544 [2024-12-09 10:08:11.475069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:57:04.544 [2024-12-09 10:08:11.483527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016edf550 00:57:04.544 [2024-12-09 10:08:11.483985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.544 [2024-12-09 10:08:11.484008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:57:04.544 [2024-12-09 10:08:11.493954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee4140 00:57:04.544 [2024-12-09 10:08:11.495091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.544 [2024-12-09 10:08:11.495126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:57:04.544 [2024-12-09 10:08:11.503180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efc560 00:57:04.544 [2024-12-09 10:08:11.504158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.544 [2024-12-09 10:08:11.504188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:57:04.544 [2024-12-09 10:08:11.511467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ede038 00:57:04.544 [2024-12-09 10:08:11.512271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.544 [2024-12-09 10:08:11.512360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:57:04.544 [2024-12-09 10:08:11.520138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef1868 00:57:04.544 [2024-12-09 10:08:11.520999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.544 [2024-12-09 10:08:11.521027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:57:04.545 [2024-12-09 10:08:11.531226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efeb58 00:57:04.545 [2024-12-09 10:08:11.532737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.545 [2024-12-09 10:08:11.532767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:57:04.545 [2024-12-09 10:08:11.537699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef8618 00:57:04.545 [2024-12-09 10:08:11.538273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.545 [2024-12-09 10:08:11.538302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:57:04.545 [2024-12-09 10:08:11.548122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef9f68 00:57:04.545 [2024-12-09 10:08:11.549235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.545 [2024-12-09 10:08:11.549265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:57:04.545 [2024-12-09 10:08:11.556251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef35f0 00:57:04.545 [2024-12-09 10:08:11.557109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.545 [2024-12-09 10:08:11.557181] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:57:04.545 [2024-12-09 10:08:11.564830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eea248 00:57:04.545 [2024-12-09 10:08:11.565708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.545 [2024-12-09 10:08:11.565737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:04.545 [2024-12-09 10:08:11.575217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eeb760 00:57:04.545 [2024-12-09 10:08:11.576753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.545 [2024-12-09 10:08:11.576782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:57:04.545 [2024-12-09 10:08:11.581716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eebfd0 00:57:04.545 [2024-12-09 10:08:11.582439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.545 [2024-12-09 10:08:11.582462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:57:04.805 [2024-12-09 10:08:11.592401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee27f0 00:57:04.805 [2024-12-09 10:08:11.593602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.805 [2024-12-09 10:08:11.593630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:57:04.805 [2024-12-09 10:08:11.600659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efa7d8 00:57:04.805 [2024-12-09 10:08:11.601615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.805 [2024-12-09 10:08:11.601640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:57:04.805 [2024-12-09 10:08:11.609173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eeaef0 00:57:04.805 [2024-12-09 10:08:11.610016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.805 [2024-12-09 10:08:11.610046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:57:04.805 [2024-12-09 10:08:11.617880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef96f8 00:57:04.805 [2024-12-09 10:08:11.618413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.805 [2024-12-09 10:08:11.618444] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:57:04.805 [2024-12-09 10:08:11.626167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efdeb0 00:57:04.805 [2024-12-09 10:08:11.626639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.805 [2024-12-09 10:08:11.626661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:57:04.805 [2024-12-09 10:08:11.636050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef96f8 00:57:04.805 [2024-12-09 10:08:11.637261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.805 [2024-12-09 10:08:11.637284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:57:04.805 [2024-12-09 10:08:11.644302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eebb98 00:57:04.805 [2024-12-09 10:08:11.645267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.805 [2024-12-09 10:08:11.645297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:57:04.805 [2024-12-09 10:08:11.652821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efd208 00:57:04.805 [2024-12-09 10:08:11.653778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.805 [2024-12-09 10:08:11.653808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:57:04.805 [2024-12-09 10:08:11.663266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efb8b8 00:57:04.805 [2024-12-09 10:08:11.664852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.805 [2024-12-09 10:08:11.664879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:57:04.805 [2024-12-09 10:08:11.669668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef4b08 00:57:04.805 [2024-12-09 10:08:11.670459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.805 [2024-12-09 10:08:11.670486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:57:04.805 [2024-12-09 10:08:11.678859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee2c28 00:57:04.805 [2024-12-09 10:08:11.679712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.805 [2024-12-09 10:08:11.679780] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:57:04.805 [2024-12-09 10:08:11.689478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee12d8 00:57:04.805 [2024-12-09 10:08:11.690926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.805 [2024-12-09 10:08:11.690991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:57:04.805 [2024-12-09 10:08:11.698803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eff3c8 00:57:04.805 [2024-12-09 10:08:11.700219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.805 [2024-12-09 10:08:11.700250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:57:04.805 [2024-12-09 10:08:11.706047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efd640 00:57:04.805 [2024-12-09 10:08:11.706956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.805 [2024-12-09 10:08:11.706987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:57:04.805 [2024-12-09 10:08:11.714879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef6cc8 00:57:04.805 [2024-12-09 10:08:11.715697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.805 [2024-12-09 10:08:11.715728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:57:04.805 [2024-12-09 10:08:11.723685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efc560 00:57:04.805 [2024-12-09 10:08:11.724331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.805 [2024-12-09 10:08:11.724362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:57:04.805 [2024-12-09 10:08:11.734261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eff3c8 00:57:04.805 [2024-12-09 10:08:11.735050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.805 [2024-12-09 10:08:11.735078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:57:04.805 [2024-12-09 10:08:11.743090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee4de8 00:57:04.805 [2024-12-09 10:08:11.744115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.805 [2024-12-09 
10:08:11.744144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:57:04.805 [2024-12-09 10:08:11.751624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eedd58 00:57:04.805 [2024-12-09 10:08:11.752507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.805 [2024-12-09 10:08:11.752535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:57:04.805 [2024-12-09 10:08:11.762088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eef270 00:57:04.805 [2024-12-09 10:08:11.763617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:3829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.805 [2024-12-09 10:08:11.763645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:57:04.805 [2024-12-09 10:08:11.768751] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee3d08 00:57:04.805 [2024-12-09 10:08:11.769634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.806 [2024-12-09 10:08:11.769655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:57:04.806 [2024-12-09 10:08:11.780185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef0788 00:57:04.806 [2024-12-09 10:08:11.781586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.806 [2024-12-09 10:08:11.781608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:57:04.806 [2024-12-09 10:08:11.786797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efbcf0 00:57:04.806 [2024-12-09 10:08:11.787418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.806 [2024-12-09 10:08:11.787441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:57:04.806 [2024-12-09 10:08:11.796109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efef90 00:57:04.806 [2024-12-09 10:08:11.797438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.806 [2024-12-09 10:08:11.797470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:57:04.806 27671.00 IOPS, 108.09 MiB/s [2024-12-09T10:08:11.850Z] [2024-12-09 10:08:11.807738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef6cc8 00:57:04.806 [2024-12-09 10:08:11.808701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9634 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.806 [2024-12-09 10:08:11.808798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:57:04.806 [2024-12-09 10:08:11.819172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efac10 00:57:04.806 [2024-12-09 10:08:11.820908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.806 [2024-12-09 10:08:11.820940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:57:04.806 [2024-12-09 10:08:11.826408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eeaab8 00:57:04.806 [2024-12-09 10:08:11.827254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.806 [2024-12-09 10:08:11.827280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:57:04.806 [2024-12-09 10:08:11.838244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efdeb0 00:57:04.806 [2024-12-09 10:08:11.839552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.806 [2024-12-09 10:08:11.839573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:57:04.806 [2024-12-09 10:08:11.845987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efef90 00:57:04.806 [2024-12-09 10:08:11.846579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:04.806 [2024-12-09 10:08:11.846621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:57:05.066 [2024-12-09 10:08:11.858151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee5658 00:57:05.066 [2024-12-09 10:08:11.859840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.066 [2024-12-09 10:08:11.859917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:57:05.066 [2024-12-09 10:08:11.865158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef1ca0 00:57:05.066 [2024-12-09 10:08:11.866022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.066 [2024-12-09 10:08:11.866048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:57:05.066 [2024-12-09 10:08:11.876411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee95a0 00:57:05.066 [2024-12-09 10:08:11.877725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 
nsid:1 lba:19438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.066 [2024-12-09 10:08:11.877749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:57:05.066 [2024-12-09 10:08:11.884827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eecc78 00:57:05.066 [2024-12-09 10:08:11.885800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.066 [2024-12-09 10:08:11.885828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:57:05.066 [2024-12-09 10:08:11.893334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eec840 00:57:05.066 [2024-12-09 10:08:11.894328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.066 [2024-12-09 10:08:11.894355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:57:05.066 [2024-12-09 10:08:11.903902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef0ff8 00:57:05.066 [2024-12-09 10:08:11.905483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.066 [2024-12-09 10:08:11.905509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:57:05.066 [2024-12-09 10:08:11.910347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef31b8 00:57:05.066 [2024-12-09 10:08:11.911119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.066 [2024-12-09 10:08:11.911150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:57:05.066 [2024-12-09 10:08:11.920864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eea680 00:57:05.066 [2024-12-09 10:08:11.922157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:24112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.066 [2024-12-09 10:08:11.922186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:57:05.066 [2024-12-09 10:08:11.928656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee12d8 00:57:05.066 [2024-12-09 10:08:11.930121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.066 [2024-12-09 10:08:11.930151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:57:05.066 [2024-12-09 10:08:11.938026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef0788 00:57:05.066 [2024-12-09 10:08:11.938913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:118 nsid:1 lba:23306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.066 [2024-12-09 10:08:11.938936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:57:05.066 [2024-12-09 10:08:11.946530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eeff18 00:57:05.066 [2024-12-09 10:08:11.947554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.066 [2024-12-09 10:08:11.947585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:57:05.066 [2024-12-09 10:08:11.955010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee38d0 00:57:05.066 [2024-12-09 10:08:11.956046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.066 [2024-12-09 10:08:11.956076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:57:05.066 [2024-12-09 10:08:11.963604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eeaab8 00:57:05.066 [2024-12-09 10:08:11.964509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.066 [2024-12-09 10:08:11.964539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:57:05.067 [2024-12-09 10:08:11.972438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef1868 00:57:05.067 [2024-12-09 10:08:11.973282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.067 [2024-12-09 10:08:11.973309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:57:05.067 [2024-12-09 10:08:11.981225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef31b8 00:57:05.067 [2024-12-09 10:08:11.982017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.067 [2024-12-09 10:08:11.982045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:57:05.067 [2024-12-09 10:08:11.989431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee38d0 00:57:05.067 [2024-12-09 10:08:11.990115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.067 [2024-12-09 10:08:11.990142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:57:05.067 [2024-12-09 10:08:11.997892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eecc78 00:57:05.067 [2024-12-09 10:08:11.998582] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.067 [2024-12-09 10:08:11.998610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:57:05.067 [2024-12-09 10:08:12.008241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ede8a8 00:57:05.067 [2024-12-09 10:08:12.009544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.067 [2024-12-09 10:08:12.009568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:57:05.067 [2024-12-09 10:08:12.016403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef2510 00:57:05.067 [2024-12-09 10:08:12.017404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.067 [2024-12-09 10:08:12.017436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:57:05.067 [2024-12-09 10:08:12.024954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee01f8 00:57:05.067 [2024-12-09 10:08:12.025930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.067 [2024-12-09 10:08:12.025958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:57:05.067 [2024-12-09 10:08:12.035243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef6458 00:57:05.067 [2024-12-09 10:08:12.036863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.067 [2024-12-09 10:08:12.036904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:57:05.067 [2024-12-09 10:08:12.041728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef6cc8 00:57:05.067 [2024-12-09 10:08:12.042543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.067 [2024-12-09 10:08:12.042564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:57:05.067 [2024-12-09 10:08:12.050536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eea680 00:57:05.067 [2024-12-09 10:08:12.051261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.067 [2024-12-09 10:08:12.051294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:57:05.067 [2024-12-09 10:08:12.058904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee01f8 00:57:05.067 [2024-12-09 10:08:12.059549] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.067 [2024-12-09 10:08:12.059578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:57:05.067 [2024-12-09 10:08:12.069099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef6020 00:57:05.067 [2024-12-09 10:08:12.070170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.067 [2024-12-09 10:08:12.070192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:57:05.067 [2024-12-09 10:08:12.078499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee0a68 00:57:05.067 [2024-12-09 10:08:12.079245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.067 [2024-12-09 10:08:12.079355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:57:05.067 [2024-12-09 10:08:12.088247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016edfdc0 00:57:05.067 [2024-12-09 10:08:12.089081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.067 [2024-12-09 10:08:12.089108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:57:05.067 [2024-12-09 10:08:12.097329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eec408 00:57:05.067 [2024-12-09 10:08:12.097888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.067 [2024-12-09 10:08:12.097918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:57:05.327 [2024-12-09 10:08:12.108325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eeb760 00:57:05.327 [2024-12-09 10:08:12.109468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.327 [2024-12-09 10:08:12.109508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:57:05.327 [2024-12-09 10:08:12.116763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee6300 00:57:05.327 [2024-12-09 10:08:12.117806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.327 [2024-12-09 10:08:12.117836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:57:05.327 [2024-12-09 10:08:12.125073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef6458 00:57:05.327 [2024-12-09 
10:08:12.125951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.327 [2024-12-09 10:08:12.125980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:57:05.327 [2024-12-09 10:08:12.135499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee84c0 00:57:05.327 [2024-12-09 10:08:12.137052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.327 [2024-12-09 10:08:12.137079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:57:05.327 [2024-12-09 10:08:12.141800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efda78 00:57:05.327 [2024-12-09 10:08:12.142573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.327 [2024-12-09 10:08:12.142602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:57:05.327 [2024-12-09 10:08:12.152208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efc128 00:57:05.327 [2024-12-09 10:08:12.153495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.327 [2024-12-09 10:08:12.153549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:57:05.327 [2024-12-09 10:08:12.158482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef8618 00:57:05.327 [2024-12-09 10:08:12.159025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.327 [2024-12-09 10:08:12.159046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:57:05.327 [2024-12-09 10:08:12.169165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efb048 00:57:05.327 [2024-12-09 10:08:12.170310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.327 [2024-12-09 10:08:12.170340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:57:05.327 [2024-12-09 10:08:12.177660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eeee38 00:57:05.327 [2024-12-09 10:08:12.178466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.327 [2024-12-09 10:08:12.178508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:57:05.327 [2024-12-09 10:08:12.186781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee73e0 
00:57:05.327 [2024-12-09 10:08:12.187685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.327 [2024-12-09 10:08:12.187774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:57:05.327 [2024-12-09 10:08:12.197911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef4298 00:57:05.327 [2024-12-09 10:08:12.199297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.327 [2024-12-09 10:08:12.199396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:57:05.327 [2024-12-09 10:08:12.204794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee7818 00:57:05.327 [2024-12-09 10:08:12.205510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.327 [2024-12-09 10:08:12.205533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:57:05.327 [2024-12-09 10:08:12.215759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee6300 00:57:05.327 [2024-12-09 10:08:12.216986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.327 [2024-12-09 10:08:12.217052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:57:05.327 [2024-12-09 10:08:12.225018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef96f8 00:57:05.327 [2024-12-09 10:08:12.226148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.327 [2024-12-09 10:08:12.226181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:57:05.327 [2024-12-09 10:08:12.232114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efb048 00:57:05.328 [2024-12-09 10:08:12.232863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.328 [2024-12-09 10:08:12.232895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:57:05.328 [2024-12-09 10:08:12.243670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef4b08 00:57:05.328 [2024-12-09 10:08:12.244607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.328 [2024-12-09 10:08:12.244640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:57:05.328 [2024-12-09 10:08:12.252439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with 
pdu=0x200016ef46d0 00:57:05.328 [2024-12-09 10:08:12.253151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.328 [2024-12-09 10:08:12.253181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:57:05.328 [2024-12-09 10:08:12.261251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef7100 00:57:05.328 [2024-12-09 10:08:12.261806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.328 [2024-12-09 10:08:12.261837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:57:05.328 [2024-12-09 10:08:12.271972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee7c50 00:57:05.328 [2024-12-09 10:08:12.273143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.328 [2024-12-09 10:08:12.273172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:57:05.328 [2024-12-09 10:08:12.280820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efa3a0 00:57:05.328 [2024-12-09 10:08:12.281867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.328 [2024-12-09 10:08:12.281895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:57:05.328 [2024-12-09 10:08:12.289540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee3d08 00:57:05.328 [2024-12-09 10:08:12.290440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.328 [2024-12-09 10:08:12.290470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:57:05.328 [2024-12-09 10:08:12.297949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee5658 00:57:05.328 [2024-12-09 10:08:12.298763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.328 [2024-12-09 10:08:12.298794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:57:05.328 [2024-12-09 10:08:12.308655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eedd58 00:57:05.328 [2024-12-09 10:08:12.310209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.328 [2024-12-09 10:08:12.310239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:57:05.328 [2024-12-09 10:08:12.315331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xc9d570) with pdu=0x200016ee95a0 00:57:05.328 [2024-12-09 10:08:12.315949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.328 [2024-12-09 10:08:12.315972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:57:05.328 [2024-12-09 10:08:12.327132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016edfdc0 00:57:05.328 [2024-12-09 10:08:12.328593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.328 [2024-12-09 10:08:12.328620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:57:05.328 [2024-12-09 10:08:12.335892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee49b0 00:57:05.328 [2024-12-09 10:08:12.337084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.328 [2024-12-09 10:08:12.337113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:57:05.328 [2024-12-09 10:08:12.344517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef96f8 00:57:05.328 [2024-12-09 10:08:12.345584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.328 [2024-12-09 10:08:12.345614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:57:05.328 [2024-12-09 10:08:12.353187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee4578 00:57:05.328 [2024-12-09 10:08:12.354251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.328 [2024-12-09 10:08:12.354280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:57:05.328 [2024-12-09 10:08:12.363081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efef90 00:57:05.328 [2024-12-09 10:08:12.364072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.328 [2024-12-09 10:08:12.364160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:57:05.587 [2024-12-09 10:08:12.374016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efb480 00:57:05.587 [2024-12-09 10:08:12.375551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.587 [2024-12-09 10:08:12.375584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:57:05.587 [2024-12-09 10:08:12.381594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xc9d570) with pdu=0x200016efd208 00:57:05.587 [2024-12-09 10:08:12.382235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.587 [2024-12-09 10:08:12.382265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:57:05.587 [2024-12-09 10:08:12.393012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016edf988 00:57:05.587 [2024-12-09 10:08:12.394219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.587 [2024-12-09 10:08:12.394243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:57:05.587 [2024-12-09 10:08:12.401583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efc998 00:57:05.587 [2024-12-09 10:08:12.402392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.587 [2024-12-09 10:08:12.402482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:57:05.587 [2024-12-09 10:08:12.410240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efa3a0 00:57:05.587 [2024-12-09 10:08:12.410980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.588 [2024-12-09 10:08:12.411008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:57:05.588 [2024-12-09 10:08:12.418684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef0ff8 00:57:05.588 [2024-12-09 10:08:12.419255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.588 [2024-12-09 10:08:12.419285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:57:05.588 [2024-12-09 10:08:12.429184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efa7d8 00:57:05.588 [2024-12-09 10:08:12.430306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.588 [2024-12-09 10:08:12.430342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:57:05.588 [2024-12-09 10:08:12.437610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef1868 00:57:05.588 [2024-12-09 10:08:12.438625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.588 [2024-12-09 10:08:12.438655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:57:05.588 [2024-12-09 10:08:12.446419] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef5be8 00:57:05.588 [2024-12-09 10:08:12.447636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.588 [2024-12-09 10:08:12.447667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:57:05.588 [2024-12-09 10:08:12.454881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eee5c8 00:57:05.588 [2024-12-09 10:08:12.455761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.588 [2024-12-09 10:08:12.455793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:57:05.588 [2024-12-09 10:08:12.463734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee6fa8 00:57:05.588 [2024-12-09 10:08:12.464692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.588 [2024-12-09 10:08:12.464715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:57:05.588 [2024-12-09 10:08:12.474496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eea680 00:57:05.588 [2024-12-09 10:08:12.475914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.588 [2024-12-09 10:08:12.475943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:57:05.588 [2024-12-09 10:08:12.480953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eed0b0 00:57:05.588 [2024-12-09 10:08:12.481579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:24762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.588 [2024-12-09 10:08:12.481608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:05.588 [2024-12-09 10:08:12.491795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef0788 00:57:05.588 [2024-12-09 10:08:12.492973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.588 [2024-12-09 10:08:12.493064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:57:05.588 [2024-12-09 10:08:12.500543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee88f8 00:57:05.588 [2024-12-09 10:08:12.501478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.588 [2024-12-09 10:08:12.501525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:57:05.588 [2024-12-09 10:08:12.509816] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efc128 00:57:05.588 [2024-12-09 10:08:12.510793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.588 [2024-12-09 10:08:12.510825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:57:05.588 [2024-12-09 10:08:12.519122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efb8b8 00:57:05.588 [2024-12-09 10:08:12.519845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.588 [2024-12-09 10:08:12.519932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:57:05.588 [2024-12-09 10:08:12.527899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016edf550 00:57:05.588 [2024-12-09 10:08:12.528466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.588 [2024-12-09 10:08:12.528503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:57:05.588 [2024-12-09 10:08:12.537718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef6458 00:57:05.588 [2024-12-09 10:08:12.538644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.588 [2024-12-09 10:08:12.538721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:57:05.588 [2024-12-09 10:08:12.546049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efb8b8 00:57:05.588 [2024-12-09 10:08:12.546856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.588 [2024-12-09 10:08:12.546885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:57:05.588 [2024-12-09 10:08:12.554319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eeb760 00:57:05.588 [2024-12-09 10:08:12.555026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.588 [2024-12-09 10:08:12.555054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:57:05.588 [2024-12-09 10:08:12.562845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eefae0 00:57:05.588 [2024-12-09 10:08:12.563535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.588 [2024-12-09 10:08:12.563564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:57:05.588 [2024-12-09 
10:08:12.573240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eeea00 00:57:05.588 [2024-12-09 10:08:12.574533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.588 [2024-12-09 10:08:12.574555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:57:05.588 [2024-12-09 10:08:12.582452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef7970 00:57:05.588 [2024-12-09 10:08:12.583798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.588 [2024-12-09 10:08:12.583828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:57:05.588 [2024-12-09 10:08:12.591326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee12d8 00:57:05.588 [2024-12-09 10:08:12.592460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.588 [2024-12-09 10:08:12.592498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:57:05.588 [2024-12-09 10:08:12.600013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eed920 00:57:05.588 [2024-12-09 10:08:12.601029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.588 [2024-12-09 10:08:12.601062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:57:05.588 [2024-12-09 10:08:12.610003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee49b0 00:57:05.588 [2024-12-09 10:08:12.611071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.588 [2024-12-09 10:08:12.611101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:57:05.588 [2024-12-09 10:08:12.618660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee95a0 00:57:05.588 [2024-12-09 10:08:12.619702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.588 [2024-12-09 10:08:12.619741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:57:05.588 [2024-12-09 10:08:12.628594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eebb98 00:57:05.848 [2024-12-09 10:08:12.629889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.848 [2024-12-09 10:08:12.629974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 
00:57:05.848 [2024-12-09 10:08:12.635050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee8d30 00:57:05.848 [2024-12-09 10:08:12.635741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:17572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.848 [2024-12-09 10:08:12.635786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:57:05.848 [2024-12-09 10:08:12.645551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef6cc8 00:57:05.848 [2024-12-09 10:08:12.646848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.848 [2024-12-09 10:08:12.646877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:57:05.848 [2024-12-09 10:08:12.653817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee8088 00:57:05.848 [2024-12-09 10:08:12.654771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.848 [2024-12-09 10:08:12.654801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:57:05.848 [2024-12-09 10:08:12.662326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ede038 00:57:05.848 [2024-12-09 10:08:12.663338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.848 [2024-12-09 10:08:12.663365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:57:05.848 [2024-12-09 10:08:12.672851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efbcf0 00:57:05.848 [2024-12-09 10:08:12.674408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.848 [2024-12-09 10:08:12.674437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:57:05.848 [2024-12-09 10:08:12.679163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee0a68 00:57:05.848 [2024-12-09 10:08:12.679803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.848 [2024-12-09 10:08:12.679838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:57:05.848 [2024-12-09 10:08:12.690048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee27f0 00:57:05.848 [2024-12-09 10:08:12.691452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.848 [2024-12-09 10:08:12.691497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006f 
p:0 m:0 dnr:0 00:57:05.848 [2024-12-09 10:08:12.698329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eedd58 00:57:05.848 [2024-12-09 10:08:12.699629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.848 [2024-12-09 10:08:12.699656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:57:05.848 [2024-12-09 10:08:12.706644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee8088 00:57:05.848 [2024-12-09 10:08:12.707799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.848 [2024-12-09 10:08:12.707829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:57:05.848 [2024-12-09 10:08:12.714885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efc128 00:57:05.848 [2024-12-09 10:08:12.715912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.848 [2024-12-09 10:08:12.715942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:05.848 [2024-12-09 10:08:12.723864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016eddc00 00:57:05.849 [2024-12-09 10:08:12.724632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:24710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.849 [2024-12-09 10:08:12.724721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:57:05.849 [2024-12-09 10:08:12.732193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef6458 00:57:05.849 [2024-12-09 10:08:12.732837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.849 [2024-12-09 10:08:12.732867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:05.849 [2024-12-09 10:08:12.740428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee7c50 00:57:05.849 [2024-12-09 10:08:12.740989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.849 [2024-12-09 10:08:12.741018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:57:05.849 [2024-12-09 10:08:12.751156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef0bc0 00:57:05.849 [2024-12-09 10:08:12.752727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:24028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:57:05.849 [2024-12-09 10:08:12.752757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 
cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:57:05.849 [2024-12-09 10:08:12.757461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016efa3a0
00:57:05.849 [2024-12-09 10:08:12.758133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:57:05.849 [2024-12-09 10:08:12.758162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:57:05.849 [2024-12-09 10:08:12.768417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee7818
00:57:05.849 [2024-12-09 10:08:12.769862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:57:05.849 [2024-12-09 10:08:12.769891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:57:05.849 [2024-12-09 10:08:12.776777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ef1430
00:57:05.849 [2024-12-09 10:08:12.778035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:57:05.849 [2024-12-09 10:08:12.778064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:57:05.849 [2024-12-09 10:08:12.785083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ede470
00:57:05.849 [2024-12-09 10:08:12.786259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:57:05.849 [2024-12-09 10:08:12.786287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:57:05.849 [2024-12-09 10:08:12.793005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d570) with pdu=0x200016ee4578
00:57:05.849 [2024-12-09 10:08:12.793889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:57:05.849 [2024-12-09 10:08:12.793919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:57:05.849 27923.50 IOPS, 109.08 MiB/s
00:57:05.849                                                                                                 Latency(us)
[2024-12-09T10:08:12.893Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:57:05.849 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:57:05.849 nvme0n1                     :       2.00   27932.46     109.11       0.00     0.00    4576.61    1845.88   13107.20
00:57:05.849 [2024-12-09T10:08:12.893Z] ===================================================================================================================
00:57:05.849 [2024-12-09T10:08:12.893Z] Total                       :              27932.46     109.11       0.00     0.00    4576.61    1845.88   13107.20
00:57:05.849 {
00:57:05.849   "results": [
00:57:05.849     {
00:57:05.849       "job": "nvme0n1",
00:57:05.849       "core_mask": "0x2",
00:57:05.849       "workload": "randwrite",
00:57:05.849       "status": "finished",
00:57:05.849       "queue_depth": 128,
00:57:05.849       "io_size": 4096,
00:57:05.849       "runtime": 2.003941,
00:57:05.849       "iops": 27932.45908936441,
00:57:05.849       "mibps": 109.11116831782972,
00:57:05.849       "io_failed": 0,
00:57:05.849       "io_timeout": 0,
00:57:05.849       "avg_latency_us": 4576.611670197432,
00:57:05.849       "min_latency_us": 1845.8829694323144,
00:57:05.849       "max_latency_us": 13107.2
00:57:05.849     }
00:57:05.849   ],
00:57:05.849   "core_count": 1
00:57:05.849 }
00:57:05.849 10:08:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:57:05.849 10:08:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:57:05.849 10:08:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:57:05.849 10:08:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:57:05.849 | .driver_specific
00:57:05.849 | .nvme_error
00:57:05.849 | .status_code
00:57:05.849 | .command_transient_transport_error'
00:57:06.107 10:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 219 > 0 ))
00:57:06.108 10:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95459
00:57:06.108 10:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 95459 ']'
00:57:06.108 10:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 95459
00:57:06.108 10:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:57:06.108 10:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:57:06.108 10:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95459
killing process with pid 95459
Received shutdown signal, test time was about 2.000000 seconds
00:57:06.108 
00:57:06.108                                                                                                 Latency(us)
[2024-12-09T10:08:13.152Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-12-09T10:08:13.152Z] ===================================================================================================================
[2024-12-09T10:08:13.152Z] Total                       :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:57:06.108 10:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:57:06.108 10:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:57:06.108 10:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95459'
00:57:06.108 10:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 95459
00:57:06.108 10:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 95459
00:57:06.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:57:06.366 10:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:57:06.366 10:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:57:06.366 10:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:57:06.366 10:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:57:06.366 10:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:57:06.366 10:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95538
00:57:06.366 10:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95538 /var/tmp/bperf.sock
00:57:06.366 10:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 95538 ']'
00:57:06.366 10:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:57:06.366 10:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:57:06.366 10:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:57:06.366 10:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:57:06.366 10:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:57:06.366 10:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:57:06.366 I/O size of 131072 is greater than zero copy threshold (65536).
00:57:06.366 Zero copy mechanism will not be used.
00:57:06.366 [2024-12-09 10:08:13.312333] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization...
00:57:06.366 [2024-12-09 10:08:13.312498] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95538 ]
00:57:06.631 [2024-12-09 10:08:13.444239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:57:06.631 [2024-12-09 10:08:13.497008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:57:07.231 10:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:57:07.231 10:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:57:07.231 10:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:57:07.231 10:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:57:07.489 10:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:57:07.489 10:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:07.489 10:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:57:07.489 10:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:07.489 10:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:57:07.489 10:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:57:07.748 nvme0n1
00:57:07.748 10:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:57:07.748 10:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:07.748 10:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:57:07.748 10:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:07.748 10:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:57:07.748 10:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:57:08.008 I/O size of 131072 is greater than zero copy threshold (65536).
00:57:08.008 Zero copy mechanism will not be used.
00:57:08.008 Running I/O for 2 seconds...
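Before the two-second run starts, the trace above compresses the whole digest-error setup into a handful of RPCs against the freshly launched bdevperf. A condensed replay of those steps, using only commands visible in this log (the rpc.py path, socket, target address, and bdev names are taken from this run, not a general recipe):

#!/usr/bin/env bash
# Replay of the setup traced above (host/digest.sh@61-69); assumes the same
# bdevperf -z instance on /var/tmp/bperf.sock and an NVMe-oF TCP target at
# 10.0.0.3:4420 exposing nqn.2016-06.io.spdk:cnode1.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock
# Keep per-status NVMe error counters and retry failed IO indefinitely.
"$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Start with crc32c error injection disabled, then attach with data digest
# (--ddgst) so every payload carries a CRC32C the transport will verify.
"$rpc" -s "$sock" accel_error_inject_error -o crc32c -t disable
"$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Corrupt every 32nd crc32c operation; each hit surfaces as one of the
# "Data digest error" / TRANSIENT TRANSPORT ERROR pairs that follow below.
"$rpc" -s "$sock" accel_error_inject_error -o crc32c -t corrupt -i 32
# Drive the timed workload through the bperf socket.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests

With the corruption interval set to 32, the flood of digest-error triplets that follows is the expected outcome, and the earlier iostat check is what turns it into a pass.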
00:57:08.008 [2024-12-09 10:08:14.842715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.008 [2024-12-09 10:08:14.842907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.008 [2024-12-09 10:08:14.843007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:08.008 [2024-12-09 10:08:14.847194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.008 [2024-12-09 10:08:14.847401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.008 [2024-12-09 10:08:14.847505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:08.008 [2024-12-09 10:08:14.851787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.008 [2024-12-09 10:08:14.851946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.009 [2024-12-09 10:08:14.851984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:08.009 [2024-12-09 10:08:14.856080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.009 [2024-12-09 10:08:14.856267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.009 [2024-12-09 10:08:14.856296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:08.009 [2024-12-09 10:08:14.860496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.009 [2024-12-09 10:08:14.860643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.009 [2024-12-09 10:08:14.860662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:08.009 [2024-12-09 10:08:14.864452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.009 [2024-12-09 10:08:14.864640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.009 [2024-12-09 10:08:14.864659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:08.009 [2024-12-09 10:08:14.868242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.009 [2024-12-09 10:08:14.868375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.009 [2024-12-09 10:08:14.868392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:57:08.009 [2024-12-09 10:08:14.872136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.009 [2024-12-09 10:08:14.872312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.009 [2024-12-09 10:08:14.872330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:08.009 [2024-12-09 10:08:14.876040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.009 [2024-12-09 10:08:14.876204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.009 [2024-12-09 10:08:14.876221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:08.009 [2024-12-09 10:08:14.880160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.009 [2024-12-09 10:08:14.880296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.009 [2024-12-09 10:08:14.880315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:08.009 [2024-12-09 10:08:14.884103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.009 [2024-12-09 10:08:14.884238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.009 [2024-12-09 10:08:14.884256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:08.009 [2024-12-09 10:08:14.887978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.009 [2024-12-09 10:08:14.888123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.009 [2024-12-09 10:08:14.888140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:08.009 [2024-12-09 10:08:14.891864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.009 [2024-12-09 10:08:14.891993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.009 [2024-12-09 10:08:14.892011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:08.009 [2024-12-09 10:08:14.895857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.009 [2024-12-09 10:08:14.895993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.009 [2024-12-09 10:08:14.896010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:08.009 [2024-12-09 10:08:14.899757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.009 [2024-12-09 10:08:14.899894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.009 [2024-12-09 10:08:14.899911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:08.009 [2024-12-09 10:08:14.903632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.009 [2024-12-09 10:08:14.903782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.009 [2024-12-09 10:08:14.903799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:08.009 [2024-12-09 10:08:14.907415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.009 [2024-12-09 10:08:14.907569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.009 [2024-12-09 10:08:14.907585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:08.009 [2024-12-09 10:08:14.911205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.009 [2024-12-09 10:08:14.911354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.009 [2024-12-09 10:08:14.911371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:08.009 [2024-12-09 10:08:14.915100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.009 [2024-12-09 10:08:14.915269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.009 [2024-12-09 10:08:14.915284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:08.009 [2024-12-09 10:08:14.919020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.009 [2024-12-09 10:08:14.919162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.009 [2024-12-09 10:08:14.919180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:08.009 [2024-12-09 10:08:14.922833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.009 [2024-12-09 10:08:14.922976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.009 [2024-12-09 10:08:14.922993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:08.009 [2024-12-09 10:08:14.926683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.009 [2024-12-09 10:08:14.926814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.009 [2024-12-09 10:08:14.926833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:08.009 [2024-12-09 10:08:14.930427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.009 [2024-12-09 10:08:14.930620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.009 [2024-12-09 10:08:14.930637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:08.009 [2024-12-09 10:08:14.934338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.009 [2024-12-09 10:08:14.934464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.009 [2024-12-09 10:08:14.934480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:08.009 [2024-12-09 10:08:14.938258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.009 [2024-12-09 10:08:14.938413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.009 [2024-12-09 10:08:14.938531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:08.009 [2024-12-09 10:08:14.942248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.009 [2024-12-09 10:08:14.942401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.009 [2024-12-09 10:08:14.942499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:08.009 [2024-12-09 10:08:14.946667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.009 [2024-12-09 10:08:14.946829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.009 [2024-12-09 10:08:14.946954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:08.009 [2024-12-09 10:08:14.950885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.009 [2024-12-09 10:08:14.951049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.009 [2024-12-09 10:08:14.951167] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:08.009 [2024-12-09 10:08:14.954974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.009 [2024-12-09 10:08:14.955124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.009 [2024-12-09 10:08:14.955248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:08.009 [2024-12-09 10:08:14.959058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.009 [2024-12-09 10:08:14.959219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.010 [2024-12-09 10:08:14.959353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:08.010 [2024-12-09 10:08:14.963158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.010 [2024-12-09 10:08:14.963317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.010 [2024-12-09 10:08:14.963389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:08.010 [2024-12-09 10:08:14.966943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.010 [2024-12-09 10:08:14.967071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.010 [2024-12-09 10:08:14.967135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:08.010 [2024-12-09 10:08:14.970766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.010 [2024-12-09 10:08:14.970921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.010 [2024-12-09 10:08:14.971041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:08.010 [2024-12-09 10:08:14.974626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.010 [2024-12-09 10:08:14.974759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.010 [2024-12-09 10:08:14.974843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:08.010 [2024-12-09 10:08:14.978470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.010 [2024-12-09 10:08:14.978629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.010 [2024-12-09 10:08:14.978714] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:08.010 [2024-12-09 10:08:14.982295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.010 [2024-12-09 10:08:14.982419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.010 [2024-12-09 10:08:14.982506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:08.010 [2024-12-09 10:08:14.986148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.010 [2024-12-09 10:08:14.986287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.010 [2024-12-09 10:08:14.986373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:08.010 [2024-12-09 10:08:14.990007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.010 [2024-12-09 10:08:14.990146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.010 [2024-12-09 10:08:14.990214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:08.010 [2024-12-09 10:08:14.993874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.010 [2024-12-09 10:08:14.994025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.010 [2024-12-09 10:08:14.994096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:08.010 [2024-12-09 10:08:14.997748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.010 [2024-12-09 10:08:14.997871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.010 [2024-12-09 10:08:14.997938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:08.010 [2024-12-09 10:08:15.001524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.010 [2024-12-09 10:08:15.001660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.010 [2024-12-09 10:08:15.001727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:08.010 [2024-12-09 10:08:15.005224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.010 [2024-12-09 10:08:15.005363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.010 [2024-12-09 
10:08:15.005432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:08.010 [2024-12-09 10:08:15.009041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.010 [2024-12-09 10:08:15.009176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.010 [2024-12-09 10:08:15.009243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:08.010 [2024-12-09 10:08:15.012845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.010 [2024-12-09 10:08:15.012999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.010 [2024-12-09 10:08:15.013100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:08.010 [2024-12-09 10:08:15.016656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.010 [2024-12-09 10:08:15.016782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.010 [2024-12-09 10:08:15.016865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:08.010 [2024-12-09 10:08:15.020540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.010 [2024-12-09 10:08:15.020692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.010 [2024-12-09 10:08:15.020785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:08.010 [2024-12-09 10:08:15.024494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.010 [2024-12-09 10:08:15.024640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.010 [2024-12-09 10:08:15.024709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:08.010 [2024-12-09 10:08:15.028528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.010 [2024-12-09 10:08:15.028698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.010 [2024-12-09 10:08:15.028768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:08.010 [2024-12-09 10:08:15.032484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.010 [2024-12-09 10:08:15.032618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:57:08.010 [2024-12-09 10:08:15.032696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:08.010 [2024-12-09 10:08:15.036336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.010 [2024-12-09 10:08:15.036531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.010 [2024-12-09 10:08:15.036549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:08.010 [2024-12-09 10:08:15.040246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.010 [2024-12-09 10:08:15.040375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.010 [2024-12-09 10:08:15.040391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:08.010 [2024-12-09 10:08:15.044072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.010 [2024-12-09 10:08:15.044213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.010 [2024-12-09 10:08:15.044229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:08.274 [2024-12-09 10:08:15.047974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.274 [2024-12-09 10:08:15.048113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.274 [2024-12-09 10:08:15.048130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:08.274 [2024-12-09 10:08:15.051811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.274 [2024-12-09 10:08:15.051956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.274 [2024-12-09 10:08:15.051973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:08.274 [2024-12-09 10:08:15.055616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.274 [2024-12-09 10:08:15.055724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.274 [2024-12-09 10:08:15.055741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:08.274 [2024-12-09 10:08:15.059325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.274 [2024-12-09 10:08:15.059455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0
00:57:08.274 [2024-12-09 10:08:15.059471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:57:08.274 [2024-12-09 10:08:15.063239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8
00:57:08.274 [2024-12-09 10:08:15.063385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:08.274 [2024-12-09 10:08:15.063401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-message sequence (data_crc32_calc_done digest error, the failing WRITE, its TRANSIENT TRANSPORT ERROR (00/22) completion) repeats every ~4 ms through 10:08:15.355, always on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 and len:32; only the WRITE lba (4832, 18240, 20128, 16032, 7104, 4064, 14624, 25504, 5152, 20608, 23680, 16864, 2272, 11648, 5376, 7520, 7328, 20864, 12416, 19200, 4256, 11456, 25280, 17024, 11200, 23040, 15840, 19776, 8928, 1344, 20928, 12992, 16640, 9504, 22624, 7232, 11008, 24128, 23968, 3424, 10240, 6848, 9056, 24416, 8352, 15168, 20896, 4608, 22560, 15872, 22112, 13920, 11040, 9184, 5504, 21504, 18624, 7872, 4352, 11392, 9696, 16256, 12448, 19232, 9632, 16480, 14976, 352, 13472, 8640, 10880, 5792, 24096, 11840, 7840), the cid (0, 1, or 2), and the sqhd cycling through 0002/0022/0042/0062 change ...]
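The repeated *ERROR* above is the NVMe/TCP data-digest (DDGST) check failing: with data digests negotiated, each data PDU carries a CRC32C of its payload, the receiver recomputes it, and tcp.c:data_crc32_calc_done reports any mismatch, after which the affected WRITE completes with a transient transport error instead of reaching the namespace. Below is a minimal standalone sketch of that check, assuming the standard CRC32C (Castagnoli) parameters NVMe/TCP digests use; the function names are illustrative, and SPDK's real path uses accelerated CRC32C helpers rather than this bitwise loop.

/* Sketch of the DDGST verification behind "Data digest error". */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* CRC32C: reflected polynomial 0x82F63B78, init and final XOR 0xFFFFFFFF. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
        uint32_t crc = 0xFFFFFFFFu;

        for (size_t i = 0; i < len; i++) {
                crc ^= buf[i];
                for (int b = 0; b < 8; b++) {
                        /* conditionally fold in the polynomial, LSB first */
                        crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
                }
        }
        return crc ^ 0xFFFFFFFFu;
}

/* Nonzero when the DDGST received with a data PDU does not match the
 * digest recomputed over the payload -- the condition logged above. */
static int ddgst_mismatch(const uint8_t *payload, size_t len, uint32_t recv_ddgst)
{
        return crc32c(payload, len) != recv_ddgst;
}

int main(void)
{
        uint8_t pdu_payload[512] = {0};
        uint32_t sent_ddgst = crc32c(pdu_payload, sizeof(pdu_payload));

        pdu_payload[0] ^= 0x01; /* corrupt one bit "on the wire" */
        if (ddgst_mismatch(pdu_payload, sizeof(pdu_payload), sent_ddgst)) {
                printf("Data digest error\n");
        }
        return 0;
}

The steady ~4 ms cadence of the failures suggests a test loop repeatedly driving this error path rather than sporadic corruption.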
00:57:08.540 [2024-12-09 10:08:15.359589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8
00:57:08.540 [2024-12-09 10:08:15.359749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:08.540 [2024-12-09 10:08:15.359768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the sequence continues every ~4 ms through 10:08:15.488 on the same tqpair and pdu, for WRITE lba 20192, 2624, 25280, 18944, 12736, 15392, 19680, 6432, 23520, 8224, 7360, 21760, 22560, 11552, 13408, 11008, 13152, 23360, 24544, 22208, 7328, 4384, 7456, 20928, 3264, 25312, 22496, 22304, 2752, 832, 3648, 8320, and 1920, with cid 0-2 and sqhd cycling 0002/0022/0042/0062 as before ...]
00:57:08.542 [2024-12-09 10:08:15.491802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8
00:57:08.542 [2024-12-09 10:08:15.491952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:08.542 [2024-12-09 10:08:15.492018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
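Every completion above reports status (00/22), which SPDK prints as (sct/sc): status code type 0x0 (generic command status) and status code 0x22, TRANSIENT TRANSPORT ERROR; with dnr:0 the host is permitted to retry. A sketch of how those fields unpack from completion dword 3, with the bit layout per the NVMe base specification (the struct and names here are illustrative, not SPDK's types):

#include <stdint.h>
#include <stdio.h>

struct cqe_status {
        uint8_t p;   /* phase tag        (DW3 bit 16)     */
        uint8_t sc;  /* status code      (DW3 bits 24:17) */
        uint8_t sct; /* status code type (DW3 bits 27:25) */
        uint8_t m;   /* more             (DW3 bit 30)     */
        uint8_t dnr; /* do not retry     (DW3 bit 31)     */
};

static struct cqe_status decode_cqe_dw3(uint32_t dw3)
{
        struct cqe_status s = {
                .p   = (dw3 >> 16) & 0x1,
                .sc  = (dw3 >> 17) & 0xFF,
                .sct = (dw3 >> 25) & 0x7,
                .m   = (dw3 >> 30) & 0x1,
                .dnr = (dw3 >> 31) & 0x1,
        };
        return s;
}

int main(void)
{
        uint32_t dw3 = 0x22u << 17; /* sct=0x0, sc=0x22, p/m/dnr all 0 */
        struct cqe_status s = decode_cqe_dw3(dw3);

        /* prints "(00/22) p:0 m:0 dnr:0", matching the log lines above */
        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
        return 0;
}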
00:57:08.542 [2024-12-09 10:08:15.495658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8
00:57:08.542 [2024-12-09 10:08:15.495833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:08.542 [2024-12-09 10:08:15.495937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... the sequence continues every ~4 ms through 10:08:15.597 on the same tqpair and pdu, for WRITE lba 384, 20960, 10080, 21376, 10016, 21280, 12768, 1088, 7488, 14784, 24768, 3136, 7424, 20256, 3168, 6016, 10720, 18912, 14080, 2048, 9600, 25376, 11616, 3488, 23328, and 3488, with cid 0-2 and sqhd cycling 0002/0022/0042/0062 as before ...]
00:57:08.803 [2024-12-09 10:08:15.601436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8
00:57:08.803 [2024-12-09 10:08:15.601608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA
BLOCK TRANSPORT 0x0 00:57:08.803 [2024-12-09 10:08:15.601626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:08.803 [2024-12-09 10:08:15.605467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.803 [2024-12-09 10:08:15.605605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.803 [2024-12-09 10:08:15.605622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:08.803 [2024-12-09 10:08:15.609350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.803 [2024-12-09 10:08:15.609491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.803 [2024-12-09 10:08:15.609508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:08.803 [2024-12-09 10:08:15.613448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.804 [2024-12-09 10:08:15.613601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.804 [2024-12-09 10:08:15.613617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:08.804 [2024-12-09 10:08:15.617383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.804 [2024-12-09 10:08:15.617515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.804 [2024-12-09 10:08:15.617543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:08.804 [2024-12-09 10:08:15.621293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.804 [2024-12-09 10:08:15.621429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.804 [2024-12-09 10:08:15.621534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:08.804 [2024-12-09 10:08:15.625150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.804 [2024-12-09 10:08:15.625276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.804 [2024-12-09 10:08:15.625367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:08.804 [2024-12-09 10:08:15.629045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.804 [2024-12-09 10:08:15.629170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.804 [2024-12-09 10:08:15.629260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:08.804 [2024-12-09 10:08:15.632951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.804 [2024-12-09 10:08:15.633096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.804 [2024-12-09 10:08:15.633184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:08.804 [2024-12-09 10:08:15.636844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.804 [2024-12-09 10:08:15.636974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.804 [2024-12-09 10:08:15.637064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:08.804 [2024-12-09 10:08:15.640751] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.804 [2024-12-09 10:08:15.640907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.804 [2024-12-09 10:08:15.640978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:08.804 [2024-12-09 10:08:15.644626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.804 [2024-12-09 10:08:15.644756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.804 [2024-12-09 10:08:15.644840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:08.804 [2024-12-09 10:08:15.648445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.804 [2024-12-09 10:08:15.648595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.804 [2024-12-09 10:08:15.648666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:08.804 [2024-12-09 10:08:15.652250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.804 [2024-12-09 10:08:15.652410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.804 [2024-12-09 10:08:15.652531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:08.804 [2024-12-09 10:08:15.656223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.804 [2024-12-09 10:08:15.656383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.804 [2024-12-09 10:08:15.656497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:08.804 [2024-12-09 10:08:15.660348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.804 [2024-12-09 10:08:15.660527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.804 [2024-12-09 10:08:15.660604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:08.804 [2024-12-09 10:08:15.664302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.804 [2024-12-09 10:08:15.664461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.804 [2024-12-09 10:08:15.664574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:08.804 [2024-12-09 10:08:15.668267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.804 [2024-12-09 10:08:15.668430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.804 [2024-12-09 10:08:15.668551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:08.804 [2024-12-09 10:08:15.672296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.804 [2024-12-09 10:08:15.672437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.804 [2024-12-09 10:08:15.672540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:08.804 [2024-12-09 10:08:15.676424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.804 [2024-12-09 10:08:15.676627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.804 [2024-12-09 10:08:15.676722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:08.804 [2024-12-09 10:08:15.680567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.804 [2024-12-09 10:08:15.680712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.804 [2024-12-09 10:08:15.680778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:08.804 [2024-12-09 10:08:15.684721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.804 [2024-12-09 10:08:15.684886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.804 [2024-12-09 10:08:15.684992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:08.804 [2024-12-09 10:08:15.689121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.804 [2024-12-09 10:08:15.689285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.804 [2024-12-09 10:08:15.689362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:08.804 [2024-12-09 10:08:15.693392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.804 [2024-12-09 10:08:15.693538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.804 [2024-12-09 10:08:15.693627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:08.804 [2024-12-09 10:08:15.697556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.804 [2024-12-09 10:08:15.697671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.804 [2024-12-09 10:08:15.697755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:08.804 [2024-12-09 10:08:15.701754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.804 [2024-12-09 10:08:15.701888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.804 [2024-12-09 10:08:15.701975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:08.804 [2024-12-09 10:08:15.705716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.804 [2024-12-09 10:08:15.705917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.804 [2024-12-09 10:08:15.705990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:08.804 [2024-12-09 10:08:15.709861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.804 [2024-12-09 10:08:15.709991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.804 [2024-12-09 10:08:15.710078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:08.804 [2024-12-09 10:08:15.713953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.804 [2024-12-09 10:08:15.714110] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.804 [2024-12-09 10:08:15.714179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:08.804 [2024-12-09 10:08:15.717862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.804 [2024-12-09 10:08:15.717995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.804 [2024-12-09 10:08:15.718065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:08.804 [2024-12-09 10:08:15.721778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.804 [2024-12-09 10:08:15.721909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.804 [2024-12-09 10:08:15.721970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:08.804 [2024-12-09 10:08:15.725825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.804 [2024-12-09 10:08:15.725951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.804 [2024-12-09 10:08:15.725969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:08.804 [2024-12-09 10:08:15.729732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.804 [2024-12-09 10:08:15.729862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.805 [2024-12-09 10:08:15.729879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:08.805 [2024-12-09 10:08:15.733661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.805 [2024-12-09 10:08:15.733789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.805 [2024-12-09 10:08:15.733806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:08.805 [2024-12-09 10:08:15.737619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.805 [2024-12-09 10:08:15.737750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.805 [2024-12-09 10:08:15.737766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:08.805 [2024-12-09 10:08:15.741390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.805 [2024-12-09 
10:08:15.741531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.805 [2024-12-09 10:08:15.741547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:08.805 [2024-12-09 10:08:15.745171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.805 [2024-12-09 10:08:15.745315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.805 [2024-12-09 10:08:15.745332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:08.805 [2024-12-09 10:08:15.749117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.805 [2024-12-09 10:08:15.749247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.805 [2024-12-09 10:08:15.749262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:08.805 [2024-12-09 10:08:15.752964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.805 [2024-12-09 10:08:15.753097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.805 [2024-12-09 10:08:15.753112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:08.805 [2024-12-09 10:08:15.756742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.805 [2024-12-09 10:08:15.756894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.805 [2024-12-09 10:08:15.756913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:08.805 [2024-12-09 10:08:15.760677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.805 [2024-12-09 10:08:15.760808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.805 [2024-12-09 10:08:15.760826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:08.805 [2024-12-09 10:08:15.764413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.805 [2024-12-09 10:08:15.764601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.805 [2024-12-09 10:08:15.764618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:08.805 [2024-12-09 10:08:15.768328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 
00:57:08.805 [2024-12-09 10:08:15.768470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.805 [2024-12-09 10:08:15.768515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:08.805 [2024-12-09 10:08:15.772440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.805 [2024-12-09 10:08:15.772596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.805 [2024-12-09 10:08:15.772612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:08.805 [2024-12-09 10:08:15.776315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.805 [2024-12-09 10:08:15.776476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.805 [2024-12-09 10:08:15.776507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:08.805 [2024-12-09 10:08:15.780256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.805 [2024-12-09 10:08:15.780405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.805 [2024-12-09 10:08:15.780423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:08.805 [2024-12-09 10:08:15.784313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.805 [2024-12-09 10:08:15.784429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.805 [2024-12-09 10:08:15.784446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:08.805 [2024-12-09 10:08:15.788202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.805 [2024-12-09 10:08:15.788356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.805 [2024-12-09 10:08:15.788373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:08.805 [2024-12-09 10:08:15.792124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.805 [2024-12-09 10:08:15.792301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.805 [2024-12-09 10:08:15.792320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:08.805 [2024-12-09 10:08:15.796048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with 
pdu=0x200016eff3c8 00:57:08.805 [2024-12-09 10:08:15.796183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.805 [2024-12-09 10:08:15.796217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:08.805 [2024-12-09 10:08:15.800351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.805 [2024-12-09 10:08:15.800501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.805 [2024-12-09 10:08:15.800520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:08.805 [2024-12-09 10:08:15.804606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.805 [2024-12-09 10:08:15.804786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.805 [2024-12-09 10:08:15.804805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:08.805 [2024-12-09 10:08:15.808789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.805 [2024-12-09 10:08:15.808958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.805 [2024-12-09 10:08:15.808978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:08.805 [2024-12-09 10:08:15.813107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.805 [2024-12-09 10:08:15.813240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.805 [2024-12-09 10:08:15.813256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:08.805 [2024-12-09 10:08:15.817332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.805 [2024-12-09 10:08:15.817522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.805 [2024-12-09 10:08:15.817542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:08.805 [2024-12-09 10:08:15.821573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.805 [2024-12-09 10:08:15.821728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.805 [2024-12-09 10:08:15.821745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:08.805 [2024-12-09 10:08:15.826010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.805 [2024-12-09 10:08:15.826124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.805 [2024-12-09 10:08:15.826144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:08.805 [2024-12-09 10:08:15.830392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.805 [2024-12-09 10:08:15.830577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.805 [2024-12-09 10:08:15.830598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:08.805 [2024-12-09 10:08:15.834712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.805 [2024-12-09 10:08:15.834849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.805 [2024-12-09 10:08:15.834868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:08.805 7843.00 IOPS, 980.38 MiB/s [2024-12-09T10:08:15.849Z] [2024-12-09 10:08:15.840342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:08.805 [2024-12-09 10:08:15.840509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:08.805 [2024-12-09 10:08:15.840526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.066 [2024-12-09 10:08:15.844158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.066 [2024-12-09 10:08:15.844291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.066 [2024-12-09 10:08:15.844308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.066 [2024-12-09 10:08:15.848168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.066 [2024-12-09 10:08:15.848303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.066 [2024-12-09 10:08:15.848320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.066 [2024-12-09 10:08:15.852092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.066 [2024-12-09 10:08:15.852222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.067 [2024-12-09 10:08:15.852239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.067 [2024-12-09 
10:08:15.855919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.067 [2024-12-09 10:08:15.856056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.067 [2024-12-09 10:08:15.856073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.067 [2024-12-09 10:08:15.859858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.067 [2024-12-09 10:08:15.859994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.067 [2024-12-09 10:08:15.860011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.067 [2024-12-09 10:08:15.863820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.067 [2024-12-09 10:08:15.863950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.067 [2024-12-09 10:08:15.863967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.067 [2024-12-09 10:08:15.867670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.067 [2024-12-09 10:08:15.867824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.067 [2024-12-09 10:08:15.867840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.067 [2024-12-09 10:08:15.871451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.067 [2024-12-09 10:08:15.871607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.067 [2024-12-09 10:08:15.871623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.067 [2024-12-09 10:08:15.875432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.067 [2024-12-09 10:08:15.875588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.067 [2024-12-09 10:08:15.875605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.067 [2024-12-09 10:08:15.879322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.067 [2024-12-09 10:08:15.879455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.067 [2024-12-09 10:08:15.879484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
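The interleaved progress counter (7843.00 IOPS, 980.38 MiB/s) is consistent with the commands being printed: 980.38 MiB/s divided by 7843 IOPS comes to almost exactly 0.125 MiB, i.e. 128 KiB per command, which matches the len:32 WRITEs if the namespace uses 4 KiB logical blocks (32 x 4 KiB = 128 KiB). The block size itself is an assumption; it is not printed in this part of the log.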
00:57:09.067 [2024-12-09 10:08:15.883132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.067 [2024-12-09 10:08:15.883277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.067 [2024-12-09 10:08:15.883303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.067 [2024-12-09 10:08:15.886992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.067 [2024-12-09 10:08:15.887136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.067 [2024-12-09 10:08:15.887153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.067 [2024-12-09 10:08:15.890848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.067 [2024-12-09 10:08:15.890979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.067 [2024-12-09 10:08:15.890996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.067 [2024-12-09 10:08:15.894613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.067 [2024-12-09 10:08:15.894761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.067 [2024-12-09 10:08:15.894779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.067 [2024-12-09 10:08:15.898467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.067 [2024-12-09 10:08:15.898625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.067 [2024-12-09 10:08:15.898642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.067 [2024-12-09 10:08:15.902203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.067 [2024-12-09 10:08:15.902338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.067 [2024-12-09 10:08:15.902354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.067 [2024-12-09 10:08:15.906090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.067 [2024-12-09 10:08:15.906245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.067 [2024-12-09 10:08:15.906262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:57:09.067 [2024-12-09 10:08:15.909977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.067 [2024-12-09 10:08:15.910089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.067 [2024-12-09 10:08:15.910105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.067 [2024-12-09 10:08:15.913802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.067 [2024-12-09 10:08:15.913940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.067 [2024-12-09 10:08:15.913957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.067 [2024-12-09 10:08:15.917628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.067 [2024-12-09 10:08:15.917783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.067 [2024-12-09 10:08:15.917801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.067 [2024-12-09 10:08:15.921508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.067 [2024-12-09 10:08:15.921655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.067 [2024-12-09 10:08:15.921671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.067 [2024-12-09 10:08:15.925237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.067 [2024-12-09 10:08:15.925376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.067 [2024-12-09 10:08:15.925392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.067 [2024-12-09 10:08:15.929152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.067 [2024-12-09 10:08:15.929294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.067 [2024-12-09 10:08:15.929312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.067 [2024-12-09 10:08:15.933032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.067 [2024-12-09 10:08:15.933176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.067 [2024-12-09 10:08:15.933192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.067 [2024-12-09 10:08:15.936938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.067 [2024-12-09 10:08:15.937089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.067 [2024-12-09 10:08:15.937105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.067 [2024-12-09 10:08:15.940821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.067 [2024-12-09 10:08:15.940973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.067 [2024-12-09 10:08:15.940990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.067 [2024-12-09 10:08:15.944675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.067 [2024-12-09 10:08:15.944804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.067 [2024-12-09 10:08:15.944821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.067 [2024-12-09 10:08:15.948433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.067 [2024-12-09 10:08:15.948633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.067 [2024-12-09 10:08:15.948650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.067 [2024-12-09 10:08:15.952752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.067 [2024-12-09 10:08:15.952859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.067 [2024-12-09 10:08:15.952876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.067 [2024-12-09 10:08:15.956805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.067 [2024-12-09 10:08:15.956952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.067 [2024-12-09 10:08:15.956969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.067 [2024-12-09 10:08:15.960739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.068 [2024-12-09 10:08:15.960912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.068 [2024-12-09 10:08:15.960929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.068 [2024-12-09 10:08:15.964827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.068 [2024-12-09 10:08:15.964965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.068 [2024-12-09 10:08:15.964983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.068 [2024-12-09 10:08:15.968820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.068 [2024-12-09 10:08:15.968957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.068 [2024-12-09 10:08:15.968974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.068 [2024-12-09 10:08:15.972974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.068 [2024-12-09 10:08:15.973116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.068 [2024-12-09 10:08:15.973133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.068 [2024-12-09 10:08:15.977212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.068 [2024-12-09 10:08:15.977355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.068 [2024-12-09 10:08:15.977372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.068 [2024-12-09 10:08:15.981162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.068 [2024-12-09 10:08:15.981292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.068 [2024-12-09 10:08:15.981309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.068 [2024-12-09 10:08:15.985075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.068 [2024-12-09 10:08:15.985213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.068 [2024-12-09 10:08:15.985230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.068 [2024-12-09 10:08:15.989029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.068 [2024-12-09 10:08:15.989176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.068 [2024-12-09 10:08:15.989193] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.068 [2024-12-09 10:08:15.992817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.068 [2024-12-09 10:08:15.992981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.068 [2024-12-09 10:08:15.992999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.068 [2024-12-09 10:08:15.996595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.068 [2024-12-09 10:08:15.996762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.068 [2024-12-09 10:08:15.996779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.068 [2024-12-09 10:08:16.000342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.068 [2024-12-09 10:08:16.000498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.068 [2024-12-09 10:08:16.000513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.068 [2024-12-09 10:08:16.004240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.068 [2024-12-09 10:08:16.004384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.068 [2024-12-09 10:08:16.004401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.068 [2024-12-09 10:08:16.008048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.068 [2024-12-09 10:08:16.008193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.068 [2024-12-09 10:08:16.008210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.068 [2024-12-09 10:08:16.011889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.068 [2024-12-09 10:08:16.012029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.068 [2024-12-09 10:08:16.012046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.068 [2024-12-09 10:08:16.015626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.068 [2024-12-09 10:08:16.015761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.068 [2024-12-09 
10:08:16.015778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.068 [2024-12-09 10:08:16.019396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.068 [2024-12-09 10:08:16.019566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.068 [2024-12-09 10:08:16.019583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.068 [2024-12-09 10:08:16.023216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.068 [2024-12-09 10:08:16.023357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.068 [2024-12-09 10:08:16.023373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.068 [2024-12-09 10:08:16.027091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.068 [2024-12-09 10:08:16.027225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.068 [2024-12-09 10:08:16.027242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.068 [2024-12-09 10:08:16.030916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.068 [2024-12-09 10:08:16.031056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.068 [2024-12-09 10:08:16.031073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.068 [2024-12-09 10:08:16.034854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.068 [2024-12-09 10:08:16.034999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.068 [2024-12-09 10:08:16.035016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.068 [2024-12-09 10:08:16.038866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.068 [2024-12-09 10:08:16.039009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.068 [2024-12-09 10:08:16.039026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.068 [2024-12-09 10:08:16.042823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.068 [2024-12-09 10:08:16.043021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
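Every completion above decodes the same way: spdk_nvme_print_completion splits the CQE status word into (SCT/SC), here 00/22, plus the phase (p), More (m), and Do Not Retry (dnr) bits, and dnr:0 is what keeps these failures retryable. A small stand-alone decoder for that 16-bit status word, following the NVMe layout (phase in bit 0, SC in bits 8:1, SCT in bits 11:9, CRD in bits 13:12, M in bit 14, DNR in bit 15), might look like the sketch below; the struct and function names are illustrative, not SPDK's.

#include <stdint.h>
#include <stdio.h>

/* Fields of the NVMe completion queue entry status word (upper half of CQE
 * dword 3, with the phase tag in bit 0), as rendered in the log lines. */
struct cqe_status {
	uint8_t p;   /* phase tag */
	uint8_t sc;  /* status code */
	uint8_t sct; /* status code type */
	uint8_t crd; /* command retry delay */
	uint8_t m;   /* more */
	uint8_t dnr; /* do not retry */
};

static struct cqe_status decode_status(uint16_t sw)
{
	struct cqe_status s = {
		.p   = sw & 0x1,
		.sc  = (sw >> 1) & 0xff,
		.sct = (sw >> 9) & 0x7,
		.crd = (sw >> 12) & 0x3,
		.m   = (sw >> 14) & 0x1,
		.dnr = (sw >> 15) & 0x1,
	};
	return s;
}

int main(void)
{
	/* SCT=0, SC=0x22, everything else clear: the (00/22) seen above. */
	uint16_t sw = (uint16_t)((0x0 << 9) | (0x22 << 1));
	struct cqe_status s = decode_status(sw);

	printf("(%02x/%02x) p:%d m:%d dnr:%d\n", s.sct, s.sc, s.p, s.m, s.dnr);
	return 0;
}

Run on the value above, this prints (00/22) p:0 m:0 dnr:0, matching the completion lines in this log.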
00:57:09.068 [2024-12-09 10:08:16.043038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.068 [2024-12-09 10:08:16.046641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.068 [2024-12-09 10:08:16.046772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.068 [2024-12-09 10:08:16.046788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.068 [2024-12-09 10:08:16.050378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.068 [2024-12-09 10:08:16.050539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.068 [2024-12-09 10:08:16.050555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.068 [2024-12-09 10:08:16.054350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.068 [2024-12-09 10:08:16.054505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.068 [2024-12-09 10:08:16.054538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.068 [2024-12-09 10:08:16.058374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.068 [2024-12-09 10:08:16.058534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.068 [2024-12-09 10:08:16.058650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.068 [2024-12-09 10:08:16.062573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.068 [2024-12-09 10:08:16.062690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.068 [2024-12-09 10:08:16.062783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.068 [2024-12-09 10:08:16.066609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.068 [2024-12-09 10:08:16.066742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.068 [2024-12-09 10:08:16.066808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.068 [2024-12-09 10:08:16.070532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.069 [2024-12-09 10:08:16.070695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:57:09.069 [2024-12-09 10:08:16.070767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.069 [2024-12-09 10:08:16.074388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.069 [2024-12-09 10:08:16.074558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.069 [2024-12-09 10:08:16.074623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.069 [2024-12-09 10:08:16.078281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.069 [2024-12-09 10:08:16.078421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.069 [2024-12-09 10:08:16.078439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.069 [2024-12-09 10:08:16.082114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.069 [2024-12-09 10:08:16.082244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.069 [2024-12-09 10:08:16.082260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.069 [2024-12-09 10:08:16.086050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.069 [2024-12-09 10:08:16.086220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.069 [2024-12-09 10:08:16.086237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.069 [2024-12-09 10:08:16.089909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.069 [2024-12-09 10:08:16.090048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.069 [2024-12-09 10:08:16.090064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.069 [2024-12-09 10:08:16.093685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.069 [2024-12-09 10:08:16.093834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.069 [2024-12-09 10:08:16.093851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.069 [2024-12-09 10:08:16.097433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.069 [2024-12-09 10:08:16.097568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20512 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.069 [2024-12-09 10:08:16.097583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.069 [2024-12-09 10:08:16.101278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.069 [2024-12-09 10:08:16.101418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.069 [2024-12-09 10:08:16.101434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.069 [2024-12-09 10:08:16.105115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.069 [2024-12-09 10:08:16.105256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.069 [2024-12-09 10:08:16.105272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.330 [2024-12-09 10:08:16.108961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.330 [2024-12-09 10:08:16.109100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.330 [2024-12-09 10:08:16.109115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.330 [2024-12-09 10:08:16.112867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.330 [2024-12-09 10:08:16.113005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.330 [2024-12-09 10:08:16.113023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.330 [2024-12-09 10:08:16.116749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.330 [2024-12-09 10:08:16.116880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.330 [2024-12-09 10:08:16.116896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.330 [2024-12-09 10:08:16.120490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.330 [2024-12-09 10:08:16.120634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.330 [2024-12-09 10:08:16.120650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.330 [2024-12-09 10:08:16.124275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.330 [2024-12-09 10:08:16.124422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:2 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.330 [2024-12-09 10:08:16.124439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.330 [2024-12-09 10:08:16.128270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.330 [2024-12-09 10:08:16.128476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.330 [2024-12-09 10:08:16.128512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.330 [2024-12-09 10:08:16.132305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.330 [2024-12-09 10:08:16.132446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.330 [2024-12-09 10:08:16.132463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.330 [2024-12-09 10:08:16.136214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.330 [2024-12-09 10:08:16.136356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.330 [2024-12-09 10:08:16.136373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.330 [2024-12-09 10:08:16.140267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.330 [2024-12-09 10:08:16.140416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.330 [2024-12-09 10:08:16.140433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.330 [2024-12-09 10:08:16.144165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.330 [2024-12-09 10:08:16.144310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.330 [2024-12-09 10:08:16.144327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.330 [2024-12-09 10:08:16.148119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.330 [2024-12-09 10:08:16.148251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.330 [2024-12-09 10:08:16.148268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.330 [2024-12-09 10:08:16.152142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.330 [2024-12-09 10:08:16.152275] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.330 [2024-12-09 10:08:16.152292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.330 [2024-12-09 10:08:16.156106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.330 [2024-12-09 10:08:16.156237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.330 [2024-12-09 10:08:16.156254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.330 [2024-12-09 10:08:16.160108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.330 [2024-12-09 10:08:16.160247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.330 [2024-12-09 10:08:16.160264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.330 [2024-12-09 10:08:16.163972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.330 [2024-12-09 10:08:16.164131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.330 [2024-12-09 10:08:16.164147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.330 [2024-12-09 10:08:16.167826] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.330 [2024-12-09 10:08:16.167957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.330 [2024-12-09 10:08:16.167975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.330 [2024-12-09 10:08:16.171754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.330 [2024-12-09 10:08:16.171890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.330 [2024-12-09 10:08:16.171907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.331 [2024-12-09 10:08:16.175672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.331 [2024-12-09 10:08:16.175814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.331 [2024-12-09 10:08:16.175834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.331 [2024-12-09 10:08:16.179598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.331 [2024-12-09 10:08:16.179752] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.331 [2024-12-09 10:08:16.179769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.331 [2024-12-09 10:08:16.183482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.331 [2024-12-09 10:08:16.183624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.331 [2024-12-09 10:08:16.183640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.331 [2024-12-09 10:08:16.187349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.331 [2024-12-09 10:08:16.187489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.331 [2024-12-09 10:08:16.187507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.331 [2024-12-09 10:08:16.191238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.331 [2024-12-09 10:08:16.191395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.331 [2024-12-09 10:08:16.191411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.331 [2024-12-09 10:08:16.195223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.331 [2024-12-09 10:08:16.195392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.331 [2024-12-09 10:08:16.195409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.331 [2024-12-09 10:08:16.199146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.331 [2024-12-09 10:08:16.199259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.331 [2024-12-09 10:08:16.199275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.331 [2024-12-09 10:08:16.203108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.331 [2024-12-09 10:08:16.203260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.331 [2024-12-09 10:08:16.203277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.331 [2024-12-09 10:08:16.207047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.331 [2024-12-09 
10:08:16.207205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.331 [2024-12-09 10:08:16.207222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.331 [2024-12-09 10:08:16.211039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.331 [2024-12-09 10:08:16.211210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.331 [2024-12-09 10:08:16.211226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.331 [2024-12-09 10:08:16.214873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.331 [2024-12-09 10:08:16.215003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.331 [2024-12-09 10:08:16.215032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.331 [2024-12-09 10:08:16.218693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.331 [2024-12-09 10:08:16.218842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.331 [2024-12-09 10:08:16.218858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.331 [2024-12-09 10:08:16.222392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.331 [2024-12-09 10:08:16.222577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.331 [2024-12-09 10:08:16.222594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.331 [2024-12-09 10:08:16.226360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.331 [2024-12-09 10:08:16.226471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.331 [2024-12-09 10:08:16.226500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.331 [2024-12-09 10:08:16.230138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.331 [2024-12-09 10:08:16.230266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.331 [2024-12-09 10:08:16.230283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.331 [2024-12-09 10:08:16.234000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 
00:57:09.331 [2024-12-09 10:08:16.234127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.331 [2024-12-09 10:08:16.234143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.331 [2024-12-09 10:08:16.237812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.331 [2024-12-09 10:08:16.237949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.331 [2024-12-09 10:08:16.237965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.331 [2024-12-09 10:08:16.241573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.331 [2024-12-09 10:08:16.241706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.331 [2024-12-09 10:08:16.241722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.331 [2024-12-09 10:08:16.245306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.331 [2024-12-09 10:08:16.245436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.331 [2024-12-09 10:08:16.245452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.331 [2024-12-09 10:08:16.249177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.331 [2024-12-09 10:08:16.249317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.331 [2024-12-09 10:08:16.249333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.331 [2024-12-09 10:08:16.253032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.331 [2024-12-09 10:08:16.253183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.331 [2024-12-09 10:08:16.253199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.331 [2024-12-09 10:08:16.257139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.331 [2024-12-09 10:08:16.257333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.331 [2024-12-09 10:08:16.257349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.331 [2024-12-09 10:08:16.260975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with 
pdu=0x200016eff3c8 00:57:09.331 [2024-12-09 10:08:16.261128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.331 [2024-12-09 10:08:16.261145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.331 [2024-12-09 10:08:16.264857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.331 [2024-12-09 10:08:16.264979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.331 [2024-12-09 10:08:16.264995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.331 [2024-12-09 10:08:16.268998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.331 [2024-12-09 10:08:16.269170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.331 [2024-12-09 10:08:16.269187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.331 [2024-12-09 10:08:16.273015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.332 [2024-12-09 10:08:16.273193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.332 [2024-12-09 10:08:16.273210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.332 [2024-12-09 10:08:16.276983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.332 [2024-12-09 10:08:16.277172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.332 [2024-12-09 10:08:16.277191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.332 [2024-12-09 10:08:16.281023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.332 [2024-12-09 10:08:16.281157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.332 [2024-12-09 10:08:16.281173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.332 [2024-12-09 10:08:16.285032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.332 [2024-12-09 10:08:16.285166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.332 [2024-12-09 10:08:16.285183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.332 [2024-12-09 10:08:16.289047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.332 [2024-12-09 10:08:16.289194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.332 [2024-12-09 10:08:16.289210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.332 [2024-12-09 10:08:16.292961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.332 [2024-12-09 10:08:16.293115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.332 [2024-12-09 10:08:16.293132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.332 [2024-12-09 10:08:16.297001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.332 [2024-12-09 10:08:16.297125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.332 [2024-12-09 10:08:16.297142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.332 [2024-12-09 10:08:16.300874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.332 [2024-12-09 10:08:16.301046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.332 [2024-12-09 10:08:16.301062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.332 [2024-12-09 10:08:16.304671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.332 [2024-12-09 10:08:16.304812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.332 [2024-12-09 10:08:16.304829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.332 [2024-12-09 10:08:16.308455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.332 [2024-12-09 10:08:16.308608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.332 [2024-12-09 10:08:16.308624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.332 [2024-12-09 10:08:16.312266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.332 [2024-12-09 10:08:16.312384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.332 [2024-12-09 10:08:16.312401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.332 [2024-12-09 10:08:16.316150] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.332 [2024-12-09 10:08:16.316285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.332 [2024-12-09 10:08:16.316302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.332 [2024-12-09 10:08:16.319941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.332 [2024-12-09 10:08:16.320102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.332 [2024-12-09 10:08:16.320119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.332 [2024-12-09 10:08:16.323762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.332 [2024-12-09 10:08:16.323875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.332 [2024-12-09 10:08:16.323892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.332 [2024-12-09 10:08:16.327488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.332 [2024-12-09 10:08:16.327637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.332 [2024-12-09 10:08:16.327653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.332 [2024-12-09 10:08:16.331213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.332 [2024-12-09 10:08:16.331369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.332 [2024-12-09 10:08:16.331385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.332 [2024-12-09 10:08:16.335023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.332 [2024-12-09 10:08:16.335160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.332 [2024-12-09 10:08:16.335176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.332 [2024-12-09 10:08:16.338831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.332 [2024-12-09 10:08:16.338960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.332 [2024-12-09 10:08:16.338976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.332 [2024-12-09 10:08:16.342596] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.332 [2024-12-09 10:08:16.342760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.332 [2024-12-09 10:08:16.342776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.332 [2024-12-09 10:08:16.346403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.332 [2024-12-09 10:08:16.346577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.332 [2024-12-09 10:08:16.346593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.332 [2024-12-09 10:08:16.350304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.332 [2024-12-09 10:08:16.350451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.332 [2024-12-09 10:08:16.350466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.332 [2024-12-09 10:08:16.354322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.332 [2024-12-09 10:08:16.354428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.332 [2024-12-09 10:08:16.354444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.332 [2024-12-09 10:08:16.358142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.332 [2024-12-09 10:08:16.358299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.332 [2024-12-09 10:08:16.358315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.332 [2024-12-09 10:08:16.362195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.332 [2024-12-09 10:08:16.362372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.332 [2024-12-09 10:08:16.362389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.332 [2024-12-09 10:08:16.366093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.332 [2024-12-09 10:08:16.366269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.332 [2024-12-09 10:08:16.366286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.332 
[2024-12-09 10:08:16.369966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.332 [2024-12-09 10:08:16.370094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.333 [2024-12-09 10:08:16.370111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.594 [2024-12-09 10:08:16.374045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.594 [2024-12-09 10:08:16.374205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.594 [2024-12-09 10:08:16.374222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.594 [2024-12-09 10:08:16.378279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.594 [2024-12-09 10:08:16.378451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.594 [2024-12-09 10:08:16.378469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.594 [2024-12-09 10:08:16.382528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.594 [2024-12-09 10:08:16.382641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.594 [2024-12-09 10:08:16.382661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.594 [2024-12-09 10:08:16.386614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.594 [2024-12-09 10:08:16.386748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.594 [2024-12-09 10:08:16.386765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.594 [2024-12-09 10:08:16.390573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.594 [2024-12-09 10:08:16.390707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.594 [2024-12-09 10:08:16.390724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.594 [2024-12-09 10:08:16.394491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.594 [2024-12-09 10:08:16.394668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.594 [2024-12-09 10:08:16.394686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:57:09.594 [2024-12-09 10:08:16.398412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.594 [2024-12-09 10:08:16.398567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.595 [2024-12-09 10:08:16.398585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.595 [2024-12-09 10:08:16.402389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.595 [2024-12-09 10:08:16.402547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.595 [2024-12-09 10:08:16.402563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.595 [2024-12-09 10:08:16.406169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.595 [2024-12-09 10:08:16.406310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.595 [2024-12-09 10:08:16.406326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.595 [2024-12-09 10:08:16.409967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.595 [2024-12-09 10:08:16.410096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.595 [2024-12-09 10:08:16.410112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.595 [2024-12-09 10:08:16.413772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.595 [2024-12-09 10:08:16.413913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.595 [2024-12-09 10:08:16.413929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.595 [2024-12-09 10:08:16.417539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.595 [2024-12-09 10:08:16.417686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.595 [2024-12-09 10:08:16.417702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.595 [2024-12-09 10:08:16.421292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.595 [2024-12-09 10:08:16.421432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.595 [2024-12-09 10:08:16.421449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.595 [2024-12-09 10:08:16.425149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.595 [2024-12-09 10:08:16.425286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.595 [2024-12-09 10:08:16.425302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.595 [2024-12-09 10:08:16.428880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.595 [2024-12-09 10:08:16.429037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.595 [2024-12-09 10:08:16.429054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.595 [2024-12-09 10:08:16.432742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.595 [2024-12-09 10:08:16.432897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.595 [2024-12-09 10:08:16.432913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:57:09.595 [2024-12-09 10:08:16.436364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.595 [2024-12-09 10:08:16.436566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.595 [2024-12-09 10:08:16.436583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:57:09.595 [2024-12-09 10:08:16.440230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.595 [2024-12-09 10:08:16.440408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.595 [2024-12-09 10:08:16.440424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:57:09.595 [2024-12-09 10:08:16.444105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.595 [2024-12-09 10:08:16.444244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.595 [2024-12-09 10:08:16.444260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:57:09.595 [2024-12-09 10:08:16.447976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8 00:57:09.595 [2024-12-09 10:08:16.448149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:57:09.595 [2024-12-09 10:08:16.448167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:57:09.595 [2024-12-09 10:08:16.451840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8
00:57:09.595 [2024-12-09 10:08:16.451993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:09.595 [2024-12-09 10:08:16.452011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... the same three-line pattern (data_crc32_calc_done digest error, the failing WRITE command, its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for every subsequent write between 10:08:16.455584 and 10:08:16.831928; only the lba, cid, and sqhd fields vary ...]
00:57:09.860 [2024-12-09 10:08:16.835685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc9d8b0) with pdu=0x200016eff3c8
00:57:09.860 [2024-12-09 10:08:16.835835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:57:09.860 [2024-12-09 10:08:16.835920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
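
Every injected digest failure above leaves the same three-line signature, and the per-bdev error counter queried just below tracks the same events. As a quick offline cross-check, the signature can simply be counted in a captured copy of this console output; a hedged one-liner, where bperf.log is a hypothetical name for wherever this output was saved, not a file the test itself writes:

    # Count transient-transport-error completions in a saved copy of this log.
    # "bperf.log" is a placeholder filename, not produced by the test run.
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf.log
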
00:57:09.860 7895.00 IOPS, 986.88 MiB/s
00:57:09.860 Latency(us)
00:57:09.860 [2024-12-09T10:08:16.904Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:57:09.860 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:57:09.860 nvme0n1 : 2.00 7890.30 986.29 0.00 0.00 2023.91 1624.09 5380.25
00:57:09.860 [2024-12-09T10:08:16.904Z] ===================================================================================================================
00:57:09.860 [2024-12-09T10:08:16.904Z] Total : 7890.30 986.29 0.00 0.00 2023.91 1624.09 5380.25
00:57:09.860 {
00:57:09.860   "results": [
00:57:09.860     {
00:57:09.860       "job": "nvme0n1",
00:57:09.860       "core_mask": "0x2",
00:57:09.860       "workload": "randwrite",
00:57:09.860       "status": "finished",
00:57:09.860       "queue_depth": 16,
00:57:09.860       "io_size": 131072,
00:57:09.860       "runtime": 2.002713,
00:57:09.860       "iops": 7890.296812374015,
00:57:09.860       "mibps": 986.2871015467518,
00:57:09.860       "io_failed": 0,
00:57:09.860       "io_timeout": 0,
00:57:09.860       "avg_latency_us": 2023.9083991910813,
00:57:09.860       "min_latency_us": 1624.0908296943232,
00:57:09.860       "max_latency_us": 5380.248034934498
00:57:09.860     }
00:57:09.860   ],
00:57:09.860   "core_count": 1
00:57:09.860 }
00:57:09.860 10:08:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:57:09.860 10:08:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:57:09.860 10:08:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:57:09.860 10:08:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:57:10.120 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 510 > 0 ))
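
The five trace lines above are the test's pass/fail check: get_transient_errcount asks the bdevperf app over its JSON-RPC socket for per-bdev iostat and extracts the NVMe transient-transport-error counter, which the injected digest failures must have driven above zero. A minimal stand-alone restatement of that helper, using the rpc.py path, socket, and jq filter exactly as traced:

    # Sketch of the get_transient_errcount helper from host/digest.sh, as traced above.
    get_transient_errcount() {
        local bdev=$1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
    }
    (( $(get_transient_errcount nvme0n1) > 0 ))   # in this run the counter is 510
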
00:57:10.120 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95538
00:57:10.120 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 95538 ']'
00:57:10.120 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 95538
00:57:10.120 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:57:10.120 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:57:10.120 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95538
00:57:10.120 killing process with pid 95538
Received shutdown signal, test time was about 2.000000 seconds
00:57:10.120
00:57:10.120 Latency(us)
[2024-12-09T10:08:17.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-09T10:08:17.164Z] ===================================================================================================================
[2024-12-09T10:08:17.164Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:57:10.120 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:57:10.120 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:57:10.120 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95538'
00:57:10.120 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 95538
00:57:10.120 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 95538
00:57:10.379 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 95240
00:57:10.379 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 95240 ']'
00:57:10.379 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 95240
00:57:10.379 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:57:10.379 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:57:10.379 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95240
00:57:10.379 killing process with pid 95240
00:57:10.379 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:57:10.379 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:57:10.379 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95240'
00:57:10.379 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 95240
00:57:10.379 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 95240
00:57:10.637 ************************************
00:57:10.637 END TEST nvmf_digest_error
00:57:10.637 ************************************
00:57:10.637
00:57:10.637 real 0m17.522s
00:57:10.637 user 0m33.511s
00:57:10.637 sys 0m4.272s
00:57:10.637 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable
00:57:10.637 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:57:10.637 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:57:10.637 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:57:10.637 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup
00:57:10.638 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:57:10.638 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:57:10.638 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
00:57:10.638 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:57:10.638 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:57:10.896 rmmod nvme_tcp
00:57:10.896 rmmod nvme_fabrics
00:57:10.896 rmmod nvme_keyring
00:57:10.896 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:57:10.896 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e
00:57:10.896 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0
00:57:10.896 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 95240 ']'
00:57:10.896 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 95240
00:57:10.896 Process with pid 95240 is not found
10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 95240 ']'
00:57:10.896 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 95240
00:57:10.897 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (95240) - No such process
00:57:10.897 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 95240 is not found'
00:57:10.897 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:57:10.897 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:57:10.897 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:57:10.897 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr
00:57:10.897 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore
00:57:10.897 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save
00:57:10.897 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:57:10.897 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:57:10.897 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:57:10.897 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:57:10.897 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:57:10.897 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:57:10.897 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
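
Both shutdowns traced above go through the same killprocess helper: check that a pid was given, probe it with kill -0, log, send the signal, and reap the child with wait; when the target already exited (pid 95240 here, torn down earlier in the run), it only reports that. A simplified, hedged sketch of that logic (the real helper in test/common/autotest_common.sh also special-cases processes running under sudo, and wait works only because the targets are children of the test shell):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1              # no pid given
        if kill -0 "$pid" 2>/dev/null; then    # is the process still alive?
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid" || true                # reap the child; ignore its exit code
        else
            echo "Process with pid $pid is not found"
        fi
    }
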
00:57:10.897 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:57:10.897 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:57:10.897 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:57:10.897 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:57:10.897 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:57:10.897 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:57:10.897 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:57:10.897 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:57:10.897 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:57:11.155 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns
00:57:11.155 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:57:11.155 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:57:11.155 10:08:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:57:11.155 10:08:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0
00:57:11.155
00:57:11.155 real 0m36.549s
00:57:11.155 user 1m7.890s
00:57:11.155 sys 0m9.180s
00:57:11.155 ************************************
00:57:11.155 END TEST nvmf_digest
00:57:11.155 ************************************
00:57:11.155 10:08:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable
00:57:11.155 10:08:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:57:11.155 10:08:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 1 -eq 1 ]]
00:57:11.155 10:08:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ tcp == \t\c\p ]]
00:57:11.155 10:08:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@38 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp
00:57:11.155 10:08:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:57:11.155 10:08:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:57:11.155 10:08:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:57:11.155 ************************************
00:57:11.155 START TEST nvmf_mdns_discovery
00:57:11.155 ************************************
00:57:11.155 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp
00:57:11.415 * Looking for test storage...
00:57:11.415 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:57:11.415 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:57:11.415 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:57:11.415 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:57:11.415 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:57:11.415 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:57:11.415 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:57:11.415 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:57:11.415 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:57:11.415 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:57:11.415 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:57:11.415 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:57:11.415 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:57:11.415 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@344 -- # case "$op" in 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@345 -- # : 1 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # decimal 1 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=1 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 1 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # decimal 2 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=2 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 2 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # return 0 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:57:11.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:11.416 --rc genhtml_branch_coverage=1 00:57:11.416 --rc genhtml_function_coverage=1 00:57:11.416 --rc genhtml_legend=1 00:57:11.416 --rc geninfo_all_blocks=1 00:57:11.416 --rc geninfo_unexecuted_blocks=1 00:57:11.416 00:57:11.416 ' 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:57:11.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:11.416 --rc genhtml_branch_coverage=1 00:57:11.416 --rc genhtml_function_coverage=1 00:57:11.416 --rc genhtml_legend=1 00:57:11.416 --rc geninfo_all_blocks=1 00:57:11.416 --rc geninfo_unexecuted_blocks=1 00:57:11.416 00:57:11.416 ' 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:57:11.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:11.416 --rc genhtml_branch_coverage=1 00:57:11.416 --rc genhtml_function_coverage=1 00:57:11.416 --rc genhtml_legend=1 00:57:11.416 --rc geninfo_all_blocks=1 00:57:11.416 --rc geninfo_unexecuted_blocks=1 00:57:11.416 00:57:11.416 ' 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:57:11.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:11.416 --rc genhtml_branch_coverage=1 00:57:11.416 --rc genhtml_function_coverage=1 00:57:11.416 --rc genhtml_legend=1 00:57:11.416 --rc geninfo_all_blocks=1 00:57:11.416 --rc geninfo_unexecuted_blocks=1 00:57:11.416 00:57:11.416 ' 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # : 0 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:57:11.416 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:57:11.416 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:57:11.417 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:57:11.417 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:57:11.417 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:57:11.417 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:57:11.417 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:57:11.417 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:57:11.417 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:57:11.417 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:57:11.417 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:57:11.417 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:57:11.417 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:57:11.417 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:57:11.417 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:57:11.417 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:57:11.417 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:57:11.417 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:57:11.417 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:57:11.417 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:57:11.417 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:57:11.417 Cannot find device "nvmf_init_br" 00:57:11.417 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:57:11.417 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:57:11.417 Cannot find device "nvmf_init_br2" 00:57:11.417 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:57:11.417 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:57:11.417 Cannot find device "nvmf_tgt_br" 00:57:11.417 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # true 00:57:11.417 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:57:11.417 Cannot find device "nvmf_tgt_br2" 00:57:11.417 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # true 00:57:11.417 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:57:11.417 Cannot find device "nvmf_init_br" 00:57:11.417 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # true 00:57:11.417 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:57:11.417 Cannot find device "nvmf_init_br2" 00:57:11.417 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # true 00:57:11.417 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:57:11.675 Cannot find device "nvmf_tgt_br" 00:57:11.675 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # true 00:57:11.675 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:57:11.675 Cannot find device "nvmf_tgt_br2" 00:57:11.675 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # true 00:57:11.675 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:57:11.675 Cannot find device "nvmf_br" 00:57:11.675 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # true 00:57:11.675 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:57:11.675 Cannot find device "nvmf_init_if" 00:57:11.675 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # true 00:57:11.675 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:57:11.675 Cannot find device "nvmf_init_if2" 00:57:11.675 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # true 00:57:11.675 10:08:18 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:57:11.675 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:57:11.675 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # true 00:57:11.675 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:57:11.675 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:57:11.675 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # true 00:57:11.675 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:57:11.675 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:57:11.675 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:57:11.675 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:57:11.675 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:57:11.675 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:57:11.675 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:57:11.675 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:57:11.675 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:57:11.675 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:57:11.675 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:57:11.675 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:57:11.675 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:57:11.675 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:57:11.675 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:57:11.675 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:57:11.675 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:57:11.675 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:57:11.675 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:57:11.675 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:57:11.675 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:57:11.675 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
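nvmf_veth_init above first probes for leftovers (the "Cannot find device" and "Cannot open network namespace" lines are the expected-failure branch behind the true guards), then rebuilds the topology: four veth pairs, target ends moved into nvmf_tgt_ns_spdk, addresses on the 10.0.0.0/24 net, and a fresh bridge. A condensed sketch of that setup, with names and addresses exactly as logged:

    # Condensed from the nvmf/common.sh@177-208 calls above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator pair 1
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator pair 2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target pair 1
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target pair 2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # target ends into the netns
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                               # bridge joining all *_br ends
    ip link set nvmf_br up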
00:57:11.934 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:57:11.934 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:57:11.934 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:57:11.934 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:57:11.934 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:57:11.934 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:57:11.934 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:57:11.934 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:57:11.934 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:57:11.934 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:57:11.934 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:57:11.934 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:57:11.934 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.113 ms 00:57:11.934 00:57:11.934 --- 10.0.0.3 ping statistics --- 00:57:11.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:11.934 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:57:11.934 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:57:11.934 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:57:11.934 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.101 ms 00:57:11.934 00:57:11.934 --- 10.0.0.4 ping statistics --- 00:57:11.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:11.934 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:57:11.934 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:57:11.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:57:11.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:57:11.934 00:57:11.934 --- 10.0.0.1 ping statistics --- 00:57:11.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:11.935 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:57:11.935 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:57:11.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:57:11.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:57:11.935 00:57:11.935 --- 10.0.0.2 ping statistics --- 00:57:11.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:11.935 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:57:11.935 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:57:11.935 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@461 -- # return 0 00:57:11.935 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:57:11.935 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:57:11.935 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:57:11.935 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:57:11.935 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:57:11.935 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:57:11.935 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:57:11.935 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:57:11.935 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:57:11.935 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:57:11.935 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:11.935 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@509 -- # nvmfpid=95896 00:57:11.935 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:57:11.935 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@510 -- # waitforlisten 95896 00:57:11.935 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 95896 ']' 00:57:11.935 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:57:11.935 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:57:11.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:57:11.935 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:57:11.935 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:57:11.935 10:08:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:11.935 [2024-12-09 10:08:18.920068] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
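With the bridge ports enslaved and the SPDK_NVMF-tagged iptables ACCEPT rules in place, the four pings above prove reachability in both directions (host to namespace on .3/.4, namespace to host on .1/.2) before anything NVMe-oF starts; only then is nvmf_tgt launched inside the namespace so its TCP listeners bind to the target addresses. A sketch of that check-then-launch step plus the RPC bring-up the log goes on to show, assuming rpc_cmd wraps scripts/rpc.py against the default /var/tmp/spdk.sock:

    # Connectivity check, then target launch, condensed from the log.
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4                  # host -> namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1         # namespace -> host
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &                # logged invocation
    nvmfpid=$!
    # --wait-for-rpc holds the app in pre-init so nvmf_set_config can land
    # before framework_start_init; flags below match the logged RPCs.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_set_config --discovery-filter=address
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.3 -s 8009                            # mDNS-advertised discovery port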
00:57:11.935 [2024-12-09 10:08:18.920551] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:57:12.193 [2024-12-09 10:08:19.070583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:57:12.193 [2024-12-09 10:08:19.113910] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:57:12.193 [2024-12-09 10:08:19.113966] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:57:12.193 [2024-12-09 10:08:19.113972] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:57:12.193 [2024-12-09 10:08:19.113977] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:57:12.193 [2024-12-09 10:08:19.113981] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:57:12.193 [2024-12-09 10:08:19.114241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:57:12.760 10:08:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:57:12.760 10:08:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0 00:57:12.760 10:08:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:57:12.760 10:08:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:57:12.760 10:08:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:13.021 10:08:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:57:13.021 10:08:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:57:13.021 10:08:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:13.021 10:08:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:13.021 10:08:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:13.021 10:08:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:57:13.021 10:08:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:13.021 10:08:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:13.021 10:08:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:13.021 10:08:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:57:13.021 10:08:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:13.021 10:08:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:13.021 [2024-12-09 10:08:19.951026] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:57:13.021 10:08:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:13.021 10:08:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:57:13.021 10:08:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:13.021 10:08:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:13.021 [2024-12-09 10:08:19.963146] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:57:13.021 10:08:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:13.021 10:08:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:57:13.021 10:08:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:13.021 10:08:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:13.021 null0 00:57:13.021 10:08:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:13.021 10:08:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:57:13.021 10:08:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:13.021 10:08:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:13.021 null1 00:57:13.021 10:08:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:13.021 10:08:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:57:13.021 10:08:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:13.021 10:08:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:13.021 null2 00:57:13.021 10:08:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:13.021 10:08:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:57:13.021 10:08:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:13.021 10:08:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:13.021 null3 00:57:13.021 10:08:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:13.021 10:08:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:57:13.021 10:08:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:13.021 10:08:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:13.021 10:08:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:13.021 10:08:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=95946 00:57:13.021 10:08:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:57:13.021 10:08:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 95946 /tmp/host.sock 00:57:13.021 10:08:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 95946 ']' 00:57:13.021 10:08:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@839 -- # local 
rpc_addr=/tmp/host.sock 00:57:13.021 10:08:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:57:13.021 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:57:13.021 10:08:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:57:13.021 10:08:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:57:13.021 10:08:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:13.281 [2024-12-09 10:08:20.085280] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:57:13.281 [2024-12-09 10:08:20.085357] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95946 ] 00:57:13.281 [2024-12-09 10:08:20.235427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:57:13.281 [2024-12-09 10:08:20.284236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:57:14.226 10:08:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:57:14.226 10:08:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0 00:57:14.226 10:08:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:57:14.226 10:08:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:57:14.226 10:08:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:57:14.226 10:08:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=95976 00:57:14.226 10:08:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:57:14.226 10:08:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:57:14.226 10:08:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:57:14.226 Process 1068 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:57:14.226 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:57:14.226 Successfully dropped root privileges. 00:57:14.226 avahi-daemon 0.8 starting up. 00:57:14.226 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:57:14.226 Successfully called chroot(). 00:57:14.226 Successfully dropped remaining capabilities. 00:57:14.226 No service file found in /etc/avahi/services. 00:57:14.226 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 00:57:14.226 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:57:14.226 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 00:57:14.226 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:57:14.226 Network interface enumeration completed. 00:57:14.226 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 
00:57:14.226 Registering new address record for 10.0.0.4 on nvmf_tgt_if2.IPv4. 00:57:14.226 Registering new address record for fe80::2c18:a9ff:fefe:6a0f on nvmf_tgt_if.*. 00:57:14.226 Registering new address record for 10.0.0.3 on nvmf_tgt_if.IPv4. 00:57:15.172 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 4257089005. 00:57:15.172 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:57:15.172 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.172 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.172 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.172 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:57:15.172 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.172 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.172 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.172 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@114 -- # notify_id=0 00:57:15.172 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # get_subsystem_names 00:57:15.172 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:57:15.172 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.172 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:57:15.172 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.172 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:57:15.172 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:57:15.172 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.172 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # [[ '' == '' ]] 00:57:15.172 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # get_bdev_list 00:57:15.172 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:57:15.172 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:57:15.172 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:57:15.172 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:57:15.172 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.172 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.172 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.431 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # [[ '' == '' ]] 00:57:15.431 10:08:22 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@123 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:57:15.431 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.431 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.431 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.431 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # get_subsystem_names 00:57:15.431 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:57:15.431 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:57:15.431 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:57:15.431 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.431 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.431 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:57:15.431 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.431 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # [[ '' == '' ]] 00:57:15.431 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # get_bdev_list 00:57:15.431 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # [[ '' == '' ]] 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_subsystem_names 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r 
'.[].name' 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ '' == '' ]] 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_bdev_list 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.432 [2024-12-09 10:08:22.406468] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ '' == '' ]] 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.432 [2024-12-09 10:08:22.438832] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@140 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.432 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:57:15.691 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.691 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@145 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:57:15.691 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.691 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.691 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.691 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_publish_mdns_prr 00:57:15.691 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.691 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.691 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.692 10:08:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 5 00:57:16.630 [2024-12-09 10:08:23.304737] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:57:16.890 [2024-12-09 10:08:23.703979] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:57:16.890 [2024-12-09 10:08:23.704014] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:57:16.890 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:57:16.890 cookie is 0 00:57:16.890 is_local: 1 00:57:16.890 our_own: 0 00:57:16.890 wide_area: 0 00:57:16.890 multicast: 1 00:57:16.890 cached: 1 00:57:16.890 [2024-12-09 10:08:23.803796] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:57:16.890 [2024-12-09 10:08:23.803834] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:57:16.890 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:57:16.890 cookie is 0 00:57:16.890 is_local: 1 00:57:16.890 our_own: 0 00:57:16.890 wide_area: 0 00:57:16.890 multicast: 1 00:57:16.890 cached: 1 00:57:17.829 [2024-12-09 10:08:24.703135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:57:17.829 [2024-12-09 10:08:24.703219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24b7850 with addr=10.0.0.4, port=8009 00:57:17.829 [2024-12-09 10:08:24.703244] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:57:17.829 [2024-12-09 10:08:24.703258] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:57:17.829 [2024-12-09 10:08:24.703265] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:57:17.829 [2024-12-09 10:08:24.805474] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:57:17.829 [2024-12-09 10:08:24.805531] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:57:17.829 [2024-12-09 10:08:24.805550] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:57:18.089 [2024-12-09 10:08:24.893405] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem mdns1_nvme0
00:57:18.089 [2024-12-09 10:08:24.954744] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420
00:57:18.089 [2024-12-09 10:08:24.955541] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x24ec820:1 started.
00:57:18.089 [2024-12-09 10:08:24.957241] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done
00:57:18.089 [2024-12-09 10:08:24.957266] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again
00:57:18.089 [2024-12-09 10:08:24.963914] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x24ec820 was disconnected and freed. delete nvme_qpair.
00:57:19.040 [2024-12-09 10:08:25.701130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:19.040 [2024-12-09 10:08:25.701197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24d53a0 with addr=10.0.0.4, port=8009
00:57:19.040 [2024-12-09 10:08:25.701217] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:57:19.040 [2024-12-09 10:08:25.701224] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:57:19.040 [2024-12-09 10:08:25.701230] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect
00:57:19.978 [2024-12-09 10:08:26.699179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:19.978 [2024-12-09 10:08:26.699226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24d5580 with addr=10.0.0.4, port=8009
00:57:19.978 [2024-12-09 10:08:26.699244] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:57:19.978 [2024-12-09 10:08:26.699251] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:57:19.978 [2024-12-09 10:08:26.699258] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect
00:57:20.548 10:08:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 'not found'
00:57:20.548 10:08:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1
00:57:20.548 10:08:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4
00:57:20.548 10:08:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009
00:57:20.548 10:08:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found'
00:57:20.548 10:08:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output
00:57:20.548 10:08:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p
00:57:20.548 10:08:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:57:20.548 +;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:57:20.548 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:57:20.548 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"'
00:57:20.548 10:08:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines
00:57:20.548 10:08:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:57:20.548 10:08:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:57:20.548 10:08:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:57:20.548 10:08:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:57:20.548 10:08:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:57:20.548 10:08:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]]
00:57:20.548 10:08:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:57:20.548 10:08:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]]
00:57:20.548 10:08:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]]
00:57:20.548 10:08:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0
00:57:20.548 10:08:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.4 -s 8009
00:57:20.548 10:08:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:20.548 10:08:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:20.548 [2024-12-09 10:08:27.550692] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 8009 ***
00:57:20.548 [2024-12-09 10:08:27.552684] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer
00:57:20.548 [2024-12-09 10:08:27.552714] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:57:20.548 10:08:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:20.548 10:08:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420
00:57:20.548 10:08:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:20.548 10:08:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:20.548 [2024-12-09 10:08:27.562602] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 ***
00:57:20.548 [2024-12-09 10:08:27.563657] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer
00:57:20.548 10:08:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:20.548 10:08:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@157 -- # sleep 1
00:57:20.808 [2024-12-09 10:08:27.695492] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again
00:57:20.808 [2024-12-09 10:08:27.695532] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:57:20.808 [2024-12-09 10:08:27.700119] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached
00:57:20.808 [2024-12-09 10:08:27.700142] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected
00:57:20.808 [2024-12-09 10:08:27.700152] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command
00:57:20.808 [2024-12-09 10:08:27.782100] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again
00:57:20.808 [2024-12-09 10:08:27.786031] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 new subsystem mdns0_nvme0
00:57:20.808 [2024-12-09 10:08:27.840294] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr was created to 10.0.0.4:4420
00:57:20.808 [2024-12-09 10:08:27.840805] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x24e9b40:1 started.
00:57:20.808 [2024-12-09 10:08:27.842161] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done
00:57:20.808 [2024-12-09 10:08:27.842194] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again
00:57:20.808 [2024-12-09 10:08:27.848669] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x24e9b40 was disconnected and freed. delete nvme_qpair.
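The check_mdns_request_exists helper traced above scans avahi-browse's machine-parseable output for a record carrying the given service name, address and port, then compares the verdict against the expected check_type. A sketch reconstructed from the xtrace (the real helper in test/nvmf/host/mdns_discovery.sh may differ in details such as error reporting):

    check_mdns_request_exists() {
        local process=$1 ip=$2 port=$3 check_type=$4
        local output line lines

        # -t: terminate after dumping the current cache, -r: resolve services,
        # -p: parseable ';'-separated output
        output=$(avahi-browse -t -r _nvme-disc._tcp -p)
        readarray -t lines <<< "$output"

        for line in "${lines[@]}"; do
            if [[ $line == *"$process"* && $line == *"$ip"* && $line == *"$port"* ]]; then
                # Advertisement present: succeed only if that was expected
                [[ $check_type == found ]] && return 0
                return 1
            fi
        done
        # No matching advertisement: succeed only for check_type='not found'
        [[ $check_type == "not found" ]] && return 0
        return 1
    }

At this point spdk1 has no listener on 10.0.0.4:8009, so only the spdk0 records match and the 'not found' check passes; the trace below repeats the same scan expecting 'found' once the spdk1 listeners exist.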
00:57:21.750 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 found
00:57:21.750 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1
00:57:21.750 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4
00:57:21.750 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009
00:57:21.750 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found
00:57:21.750 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output
00:57:21.750 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p
00:57:21.750 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local
00:57:21.750 +;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:57:21.750 +;(null);IPv4;spdk1;_nvme-disc._tcp;local
00:57:21.750 +;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:57:21.750 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:57:21.750 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:57:21.750 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:57:21.750 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"'
00:57:21.750 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines
00:57:21.750 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:57:21.750 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:57:21.750 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]]
00:57:21.750 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:57:21.750 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:57:21.750 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:57:21.750 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:57:21.750 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]]
00:57:21.750 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:57:21.750 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:57:21.750 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:57:21.750 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]]
00:57:21.750 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\4* ]]
00:57:21.750 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]]
00:57:21.750 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]]
00:57:21.750 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0
00:57:21.750 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # get_mdns_discovery_svcs
00:57:21.750 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info
00:57:21.750 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:21.750 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name'
00:57:21.750 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort
00:57:21.751 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:21.751 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs
00:57:21.751 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:21.751 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # [[ mdns == \m\d\n\s ]]
00:57:21.751 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # get_discovery_ctrlrs
00:57:21.751 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:57:21.751 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:21.751 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:21.751 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name'
00:57:21.751 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort
00:57:21.751 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs
00:57:21.751 [2024-12-09 10:08:28.694373] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local'
00:57:21.751 [2024-12-09 10:08:28.694398] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3)
00:57:21.751 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:57:21.751 cookie is 0
00:57:21.751 is_local: 1
00:57:21.751 our_own: 0
00:57:21.751 wide_area: 0
00:57:21.751 multicast: 1
00:57:21.751 cached: 1
00:57:21.751 [2024-12-09 10:08:28.694408] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009
00:57:21.751 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:21.751 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]]
00:57:21.751 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names
00:57:21.751 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:57:21.751 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name'
00:57:21.751 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort
00:57:21.751 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:21.751 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs
00:57:21.751 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:21.751 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:21.751 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]]
00:57:21.751 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list
00:57:21.751 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:57:21.751 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:57:21.751 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:21.751 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:57:21.751 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]]
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4420 == \4\4\2\0 ]]
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:22.012 [2024-12-09 10:08:28.893993] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local'
00:57:22.012 [2024-12-09 10:08:28.894015] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4)
00:57:22.012 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:57:22.012 cookie is 0
00:57:22.012 is_local: 1
00:57:22.012 our_own: 0
00:57:22.012 wide_area: 0
00:57:22.012 multicast: 1
00:57:22.012 cached: 1
00:57:22.012 [2024-12-09 10:08:28.894025] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.4 trid->trsvcid: 8009
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4420 == \4\4\2\0 ]]
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length'
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=2
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 2 == 2 ]]
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@173 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:22.012 [2024-12-09 10:08:28.988613] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x24f1c80:1 started.
00:57:22.012 [2024-12-09 10:08:28.991324] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x261ae90:1 started.
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:22.012 10:08:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # sleep 1
00:57:22.012 [2024-12-09 10:08:28.996872] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x24f1c80 was disconnected and freed. delete nvme_qpair.
00:57:22.012 [2024-12-09 10:08:28.997199] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x261ae90 was disconnected and freed. delete nvme_qpair.
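The two helpers exercised just above are thin wrappers around the host application's RPC socket. From the @81 and @116/@117 traces they come down to roughly this (a sketch; exact quoting in the real mdns_discovery.sh may differ):

    get_mdns_discovery_svcs() {
        # Names of the active mDNS discovery services on the host app
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info | jq -r '.[].name' | sort | xargs
    }

    get_notification_count() {
        # Events emitted since the last check; advance the cursor so each
        # event is counted exactly once (notify_id goes 0 -> 2 -> 4 in this run)
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

The count of 2 observed here presumably corresponds to the bdev_register events for the namespaces of the two newly attached controllers.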
00:57:22.985 10:08:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list
00:57:22.985 10:08:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:57:22.985 10:08:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:57:22.985 10:08:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:57:22.985 10:08:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:22.985 10:08:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:57:22.985 10:08:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:23.245 10:08:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:23.245 10:08:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]]
00:57:23.245 10:08:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count
00:57:23.245 10:08:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:57:23.245 10:08:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:23.245 10:08:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:23.245 10:08:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length'
00:57:23.245 10:08:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:23.245 10:08:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2
00:57:23.245 10:08:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4
00:57:23.245 10:08:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 2 == 2 ]]
00:57:23.245 10:08:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421
00:57:23.245 10:08:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:23.245 10:08:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:23.245 [2024-12-09 10:08:30.110837] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:57:23.245 [2024-12-09 10:08:30.111618] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer
00:57:23.245 [2024-12-09 10:08:30.111653] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:57:23.245 [2024-12-09 10:08:30.111681] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer
00:57:23.245 [2024-12-09 10:08:30.111692] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command
00:57:23.245 10:08:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:23.245 10:08:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4421
00:57:23.246 10:08:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:23.246 10:08:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:23.246 [2024-12-09 10:08:30.118724] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4421 ***
00:57:23.246 [2024-12-09 10:08:30.119591] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer
00:57:23.246 [2024-12-09 10:08:30.119635] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer
00:57:23.246 10:08:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:23.246 10:08:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@184 -- # sleep 1
00:57:23.246 [2024-12-09 10:08:30.253446] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for mdns1_nvme0
00:57:23.246 [2024-12-09 10:08:30.253834] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new path for mdns0_nvme0
00:57:23.505 [2024-12-09 10:08:30.318961] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 2] ctrlr was created to 10.0.0.4:4421
00:57:23.505 [2024-12-09 10:08:30.319023] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done
00:57:23.505 [2024-12-09 10:08:30.319031] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again
00:57:23.505 [2024-12-09 10:08:30.319036] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again
00:57:23.505 [2024-12-09 10:08:30.319050] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command
00:57:23.505 [2024-12-09 10:08:30.319729] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421
00:57:23.505 [2024-12-09 10:08:30.319758] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done
00:57:23.505 [2024-12-09 10:08:30.319763] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again
00:57:23.505 [2024-12-09 10:08:30.319766] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again
00:57:23.505 [2024-12-09 10:08:30.319776] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:57:23.505 [2024-12-09 10:08:30.364347] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again
00:57:23.505 [2024-12-09 10:08:30.364381] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again
00:57:23.505 [2024-12-09 10:08:30.365320] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again
00:57:23.505 [2024-12-09 10:08:30.365333] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again
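With the 4421 listeners added, each discovery controller's log page now reports a second path for its subsystem, and the checks that follow assert "4420 4421" per controller. The three query helpers they use reduce, per their xtrace, to jq one-liners over the host app's RPC socket (a reconstruction, not a verbatim copy of the script):

    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    get_subsystem_paths() {
        # One trsvcid per connected path of the named controller,
        # e.g. "4420 4421" once the second listener has been discovered
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }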
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_subsystem_names
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name'
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]]
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]]
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # get_subsystem_paths mdns0_nvme0
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # get_subsystem_paths mdns1_nvme0
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # get_notification_count
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length'
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ 0 == 0 ]]
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:24.448 [2024-12-09 10:08:31.377753] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer
00:57:24.448 [2024-12-09 10:08:31.377782] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:57:24.448 [2024-12-09 10:08:31.377806] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer
00:57:24.448 [2024-12-09 10:08:31.377816] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@196 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:24.448 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:24.448 [2024-12-09 10:08:31.383097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:57:24.448 [2024-12-09 10:08:31.383128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:24.448 [2024-12-09 10:08:31.383138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:57:24.448 [2024-12-09 10:08:31.383145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:24.448 [2024-12-09 10:08:31.383152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:57:24.448 [2024-12-09 10:08:31.383158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:24.448 [2024-12-09 10:08:31.383166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:57:24.448 [2024-12-09 10:08:31.383172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:24.448 [2024-12-09 10:08:31.383179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2461780 is same with the state(6) to be set
00:57:24.448 [2024-12-09 10:08:31.385742] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer
00:57:24.448 [2024-12-09 10:08:31.385778] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer
00:57:24.448 [2024-12-09 10:08:31.385919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:57:24.448 [2024-12-09 10:08:31.385939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:24.448 [2024-12-09 10:08:31.385946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:57:24.448 [2024-12-09 10:08:31.385952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:24.448 [2024-12-09 10:08:31.385960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:57:24.448 [2024-12-09 10:08:31.385966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:24.448 [2024-12-09 10:08:31.385973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:57:24.448 [2024-12-09 10:08:31.385978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:57:24.448 [2024-12-09 10:08:31.385984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d6c40 is same with the state(6) to be set
00:57:24.449 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:24.449 10:08:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # sleep 1
00:57:24.449 [2024-12-09 10:08:31.393046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2461780 (9): Bad file descriptor
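Removing the 4420 listeners aborts the outstanding async-event requests (the ABORTED - SQ DELETION completions above) and invalidates the TCP sockets, so bdev_nvme starts resetting each controller. A follow-up check along these lines would confirm that only the 4421 path survives once the failed qpairs are torn down; this is an illustration built on the get_subsystem_paths helper above, and wait_for_single_path is a hypothetical name, not a function from the script:

    wait_for_single_path() {
        local name=$1 i
        for ((i = 0; i < 20; i++)); do
            [[ $(get_subsystem_paths "$name") == "4421" ]] && return 0
            sleep 0.5
        done
        return 1
    }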
00:57:24.449 [2024-12-09 10:08:31.395879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d6c40 (9): Bad file descriptor
00:57:24.449 [2024-12-09 10:08:31.403038] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:57:24.449 [2024-12-09 10:08:31.403053] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:57:24.449 [2024-12-09 10:08:31.403057] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:57:24.449 [2024-12-09 10:08:31.403061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:57:24.449 [2024-12-09 10:08:31.403086] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:57:24.449 [2024-12-09 10:08:31.403149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:24.449 [2024-12-09 10:08:31.403167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461780 with addr=10.0.0.3, port=4420
00:57:24.449 [2024-12-09 10:08:31.403174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2461780 is same with the state(6) to be set
00:57:24.449 [2024-12-09 10:08:31.403184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2461780 (9): Bad file descriptor
00:57:24.449 [2024-12-09 10:08:31.403194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:57:24.449 [2024-12-09 10:08:31.403200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:57:24.449 [2024-12-09 10:08:31.403208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:57:24.449 [2024-12-09 10:08:31.403231] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:57:24.449 [2024-12-09 10:08:31.403235] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:57:24.449 [2024-12-09 10:08:31.403239] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:57:24.449 [2024-12-09 10:08:31.405866] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset.
00:57:24.449 [2024-12-09 10:08:31.405879] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted.
00:57:24.449 [2024-12-09 10:08:31.405882] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr.
00:57:24.449 [2024-12-09 10:08:31.405886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller
00:57:24.449 [2024-12-09 10:08:31.405905] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr.
00:57:24.449 [2024-12-09 10:08:31.405939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:24.449 [2024-12-09 10:08:31.405955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24d6c40 with addr=10.0.0.4, port=4420
00:57:24.449 [2024-12-09 10:08:31.405961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d6c40 is same with the state(6) to be set
00:57:24.449 [2024-12-09 10:08:31.405971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d6c40 (9): Bad file descriptor
00:57:24.449 [2024-12-09 10:08:31.405979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state
00:57:24.449 [2024-12-09 10:08:31.405984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed
00:57:24.449 [2024-12-09 10:08:31.405990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state.
00:57:24.449 [2024-12-09 10:08:31.405996] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected.
00:57:24.449 [2024-12-09 10:08:31.405999] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets.
00:57:24.449 [2024-12-09 10:08:31.406002] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed.
00:57:24.449 [2024-12-09 10:08:31.413073] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:57:24.449 [2024-12-09 10:08:31.413090] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:57:24.449 [2024-12-09 10:08:31.413093] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:57:24.449 [2024-12-09 10:08:31.413096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:57:24.449 [2024-12-09 10:08:31.413114] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:57:24.449 [2024-12-09 10:08:31.413145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:24.449 [2024-12-09 10:08:31.413177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461780 with addr=10.0.0.3, port=4420
00:57:24.449 [2024-12-09 10:08:31.413184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2461780 is same with the state(6) to be set
00:57:24.449 [2024-12-09 10:08:31.413194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2461780 (9): Bad file descriptor
00:57:24.449 [2024-12-09 10:08:31.413202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:57:24.449 [2024-12-09 10:08:31.413208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:57:24.449 [2024-12-09 10:08:31.413214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:57:24.449 [2024-12-09 10:08:31.413219] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:57:24.449 [2024-12-09 10:08:31.413222] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:57:24.449 [2024-12-09 10:08:31.413225] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:57:24.449 [2024-12-09 10:08:31.415893] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset.
00:57:24.449 [2024-12-09 10:08:31.415910] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted.
00:57:24.449 [2024-12-09 10:08:31.415913] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr.
00:57:24.449 [2024-12-09 10:08:31.415916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller
00:57:24.449 [2024-12-09 10:08:31.415934] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr.
00:57:24.449 [2024-12-09 10:08:31.415966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:24.449 [2024-12-09 10:08:31.415989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24d6c40 with addr=10.0.0.4, port=4420
00:57:24.449 [2024-12-09 10:08:31.415995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d6c40 is same with the state(6) to be set
00:57:24.449 [2024-12-09 10:08:31.416005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d6c40 (9): Bad file descriptor
00:57:24.449 [2024-12-09 10:08:31.416013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state
00:57:24.449 [2024-12-09 10:08:31.416018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed
00:57:24.449 [2024-12-09 10:08:31.416039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state.
00:57:24.449 [2024-12-09 10:08:31.416045] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected.
00:57:24.449 [2024-12-09 10:08:31.416048] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets.
00:57:24.449 [2024-12-09 10:08:31.416051] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed.
00:57:24.449 [2024-12-09 10:08:31.423102] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:57:24.449 [2024-12-09 10:08:31.423117] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:57:24.449 [2024-12-09 10:08:31.423120] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:57:24.449 [2024-12-09 10:08:31.423123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:57:24.449 [2024-12-09 10:08:31.423140] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:57:24.449 [2024-12-09 10:08:31.423169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:24.449 [2024-12-09 10:08:31.423183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461780 with addr=10.0.0.3, port=4420
00:57:24.449 [2024-12-09 10:08:31.423189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2461780 is same with the state(6) to be set
00:57:24.449 [2024-12-09 10:08:31.423198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2461780 (9): Bad file descriptor
00:57:24.449 [2024-12-09 10:08:31.423206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:57:24.449 [2024-12-09 10:08:31.423211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:57:24.449 [2024-12-09 10:08:31.423216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:57:24.449 [2024-12-09 10:08:31.423221] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:57:24.449 [2024-12-09 10:08:31.423224] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:57:24.449 [2024-12-09 10:08:31.423227] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:57:24.449 [2024-12-09 10:08:31.425920] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset.
00:57:24.449 [2024-12-09 10:08:31.425933] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted.
00:57:24.449 [2024-12-09 10:08:31.425936] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr.
00:57:24.449 [2024-12-09 10:08:31.425939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller
00:57:24.449 [2024-12-09 10:08:31.425957] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr.
00:57:24.449 [2024-12-09 10:08:31.426000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:24.449 [2024-12-09 10:08:31.426016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24d6c40 with addr=10.0.0.4, port=4420
00:57:24.449 [2024-12-09 10:08:31.426022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d6c40 is same with the state(6) to be set
00:57:24.449 [2024-12-09 10:08:31.426031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d6c40 (9): Bad file descriptor
00:57:24.449 [2024-12-09 10:08:31.426039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state
00:57:24.450 [2024-12-09 10:08:31.426044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed
00:57:24.450 [2024-12-09 10:08:31.426050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state.
00:57:24.450 [2024-12-09 10:08:31.426055] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected.
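Each retry cycle above runs the same state machine: delete the qpairs, disconnect, reconnect, fail with ECONNREFUSED, clear pending resets. How long and how often bdev_nvme keeps retrying a lost controller is governed by its reconnect options; for reference they can be tuned roughly like this (illustrative values, typically issued at start-up before any controller is attached; these are not the values used by this test):

    rpc_cmd -s /tmp/host.sock bdev_nvme_set_options \
        --ctrlr-loss-timeout-sec 30 \
        --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 5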
00:57:24.450 [2024-12-09 10:08:31.426058] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:57:24.450 [2024-12-09 10:08:31.426061] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:57:24.450 [2024-12-09 10:08:31.433129] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:57:24.450 [2024-12-09 10:08:31.433149] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:57:24.450 [2024-12-09 10:08:31.433152] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:57:24.450 [2024-12-09 10:08:31.433155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:57:24.450 [2024-12-09 10:08:31.433173] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:57:24.450 [2024-12-09 10:08:31.433222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:57:24.450 [2024-12-09 10:08:31.433238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461780 with addr=10.0.0.3, port=4420 00:57:24.450 [2024-12-09 10:08:31.433245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2461780 is same with the state(6) to be set 00:57:24.450 [2024-12-09 10:08:31.433254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2461780 (9): Bad file descriptor 00:57:24.450 [2024-12-09 10:08:31.433263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:57:24.450 [2024-12-09 10:08:31.433269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:57:24.450 [2024-12-09 10:08:31.433275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:57:24.450 [2024-12-09 10:08:31.433280] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:57:24.450 [2024-12-09 10:08:31.433283] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:57:24.450 [2024-12-09 10:08:31.433287] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:57:24.450 [2024-12-09 10:08:31.435944] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:57:24.450 [2024-12-09 10:08:31.435959] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:57:24.450 [2024-12-09 10:08:31.435962] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:57:24.450 [2024-12-09 10:08:31.435965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:57:24.450 [2024-12-09 10:08:31.435984] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:57:24.450 [2024-12-09 10:08:31.436013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:57:24.450 [2024-12-09 10:08:31.436029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24d6c40 with addr=10.0.0.4, port=4420 00:57:24.450 [2024-12-09 10:08:31.436036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d6c40 is same with the state(6) to be set 00:57:24.450 [2024-12-09 10:08:31.436045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d6c40 (9): Bad file descriptor 00:57:24.450 [2024-12-09 10:08:31.436054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:57:24.450 [2024-12-09 10:08:31.436059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:57:24.450 [2024-12-09 10:08:31.436065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:57:24.450 [2024-12-09 10:08:31.436070] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:57:24.450 [2024-12-09 10:08:31.436073] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:57:24.450 [2024-12-09 10:08:31.436076] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:57:24.450 [2024-12-09 10:08:31.443161] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:57:24.450 [2024-12-09 10:08:31.443176] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:57:24.450 [2024-12-09 10:08:31.443179] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:57:24.450 [2024-12-09 10:08:31.443182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:57:24.450 [2024-12-09 10:08:31.443201] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:57:24.450 [2024-12-09 10:08:31.443230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:57:24.450 [2024-12-09 10:08:31.443263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461780 with addr=10.0.0.3, port=4420 00:57:24.450 [2024-12-09 10:08:31.443270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2461780 is same with the state(6) to be set 00:57:24.450 [2024-12-09 10:08:31.443280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2461780 (9): Bad file descriptor 00:57:24.450 [2024-12-09 10:08:31.443295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:57:24.450 [2024-12-09 10:08:31.443301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:57:24.450 [2024-12-09 10:08:31.443307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:57:24.450 [2024-12-09 10:08:31.443314] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:57:24.450 [2024-12-09 10:08:31.443318] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:57:24.450 [2024-12-09 10:08:31.443321] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:57:24.450 [2024-12-09 10:08:31.445972] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:57:24.450 [2024-12-09 10:08:31.445989] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:57:24.450 [2024-12-09 10:08:31.445992] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:57:24.450 [2024-12-09 10:08:31.445995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:57:24.450 [2024-12-09 10:08:31.446014] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:57:24.450 [2024-12-09 10:08:31.446051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:57:24.450 [2024-12-09 10:08:31.446068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24d6c40 with addr=10.0.0.4, port=4420 00:57:24.450 [2024-12-09 10:08:31.446075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d6c40 is same with the state(6) to be set 00:57:24.450 [2024-12-09 10:08:31.446086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d6c40 (9): Bad file descriptor 00:57:24.450 [2024-12-09 10:08:31.446104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:57:24.450 [2024-12-09 10:08:31.446110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:57:24.450 [2024-12-09 10:08:31.446117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:57:24.450 [2024-12-09 10:08:31.446123] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:57:24.450 [2024-12-09 10:08:31.446126] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:57:24.450 [2024-12-09 10:08:31.446129] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:57:24.450 [2024-12-09 10:08:31.453188] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:57:24.450 [2024-12-09 10:08:31.453204] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:57:24.450 [2024-12-09 10:08:31.453207] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:57:24.450 [2024-12-09 10:08:31.453209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:57:24.450 [2024-12-09 10:08:31.453225] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:57:24.450 [2024-12-09 10:08:31.453255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:24.450 [2024-12-09 10:08:31.453285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461780 with addr=10.0.0.3, port=4420
00:57:24.450 [2024-12-09 10:08:31.453292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2461780 is same with the state(6) to be set
00:57:24.450 [2024-12-09 10:08:31.453301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2461780 (9): Bad file descriptor
00:57:24.450 [2024-12-09 10:08:31.453309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:57:24.450 [2024-12-09 10:08:31.453315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:57:24.450 [2024-12-09 10:08:31.453320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:57:24.450 [2024-12-09 10:08:31.453325] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:57:24.450 [2024-12-09 10:08:31.453329] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:57:24.450 [2024-12-09 10:08:31.453331] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:57:24.450 [2024-12-09 10:08:31.456002] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset.
00:57:24.450 [2024-12-09 10:08:31.456018] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted.
00:57:24.450 [2024-12-09 10:08:31.456021] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr.
00:57:24.450 [2024-12-09 10:08:31.456024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller
00:57:24.450 [2024-12-09 10:08:31.456042] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr.
00:57:24.450 [2024-12-09 10:08:31.456071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:24.450 [2024-12-09 10:08:31.456086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24d6c40 with addr=10.0.0.4, port=4420
00:57:24.450 [2024-12-09 10:08:31.456094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d6c40 is same with the state(6) to be set
00:57:24.450 [2024-12-09 10:08:31.456103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d6c40 (9): Bad file descriptor
00:57:24.450 [2024-12-09 10:08:31.456112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state
00:57:24.450 [2024-12-09 10:08:31.456116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed
00:57:24.451 [2024-12-09 10:08:31.456122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state.
00:57:24.451 [2024-12-09 10:08:31.456127] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected.
00:57:24.451 [2024-12-09 10:08:31.456130] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets.
00:57:24.451 [2024-12-09 10:08:31.456133] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed.
00:57:24.451 [2024-12-09 10:08:31.463215] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:57:24.451 [2024-12-09 10:08:31.463235] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:57:24.451 [2024-12-09 10:08:31.463239] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:57:24.451 [2024-12-09 10:08:31.463241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:57:24.451 [2024-12-09 10:08:31.463261] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:57:24.451 [2024-12-09 10:08:31.463305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:24.451 [2024-12-09 10:08:31.463338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461780 with addr=10.0.0.3, port=4420
00:57:24.451 [2024-12-09 10:08:31.463346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2461780 is same with the state(6) to be set
00:57:24.451 [2024-12-09 10:08:31.463356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2461780 (9): Bad file descriptor
00:57:24.451 [2024-12-09 10:08:31.463366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:57:24.451 [2024-12-09 10:08:31.463372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:57:24.451 [2024-12-09 10:08:31.463379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:57:24.451 [2024-12-09 10:08:31.463385] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:57:24.451 [2024-12-09 10:08:31.463388] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:57:24.451 [2024-12-09 10:08:31.463392] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:57:24.451 [2024-12-09 10:08:31.466030] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset.
00:57:24.451 [2024-12-09 10:08:31.466044] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted.
00:57:24.451 [2024-12-09 10:08:31.466048] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr.
00:57:24.451 [2024-12-09 10:08:31.466051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller
00:57:24.451 [2024-12-09 10:08:31.466069] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr.
00:57:24.451 [2024-12-09 10:08:31.466101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:24.451 [2024-12-09 10:08:31.466116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24d6c40 with addr=10.0.0.4, port=4420
00:57:24.451 [2024-12-09 10:08:31.466123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d6c40 is same with the state(6) to be set
00:57:24.451 [2024-12-09 10:08:31.466140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d6c40 (9): Bad file descriptor
00:57:24.451 [2024-12-09 10:08:31.466153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state
00:57:24.451 [2024-12-09 10:08:31.466159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed
00:57:24.451 [2024-12-09 10:08:31.466166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state.
00:57:24.451 [2024-12-09 10:08:31.466171] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected.
00:57:24.451 [2024-12-09 10:08:31.466175] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets.
00:57:24.451 [2024-12-09 10:08:31.466178] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed.
00:57:24.451 [2024-12-09 10:08:31.473254] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:57:24.451 [2024-12-09 10:08:31.473277] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:57:24.451 [2024-12-09 10:08:31.473281] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:57:24.451 [2024-12-09 10:08:31.473284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:57:24.451 [2024-12-09 10:08:31.473305] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:57:24.451 [2024-12-09 10:08:31.473343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:24.451 [2024-12-09 10:08:31.473361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461780 with addr=10.0.0.3, port=4420
00:57:24.451 [2024-12-09 10:08:31.473369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2461780 is same with the state(6) to be set
00:57:24.451 [2024-12-09 10:08:31.473379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2461780 (9): Bad file descriptor
00:57:24.451 [2024-12-09 10:08:31.473389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:57:24.451 [2024-12-09 10:08:31.473395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:57:24.451 [2024-12-09 10:08:31.473401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:57:24.451 [2024-12-09 10:08:31.473407] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:57:24.451 [2024-12-09 10:08:31.473411] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:57:24.451 [2024-12-09 10:08:31.473414] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:57:24.451 [2024-12-09 10:08:31.476059] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset.
00:57:24.451 [2024-12-09 10:08:31.476081] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted.
00:57:24.451 [2024-12-09 10:08:31.476085] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr.
00:57:24.451 [2024-12-09 10:08:31.476088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller
00:57:24.451 [2024-12-09 10:08:31.476109] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr.
00:57:24.451 [2024-12-09 10:08:31.476145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:24.451 [2024-12-09 10:08:31.476163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24d6c40 with addr=10.0.0.4, port=4420
00:57:24.451 [2024-12-09 10:08:31.476172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d6c40 is same with the state(6) to be set
00:57:24.451 [2024-12-09 10:08:31.476199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d6c40 (9): Bad file descriptor
00:57:24.451 [2024-12-09 10:08:31.476209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state
00:57:24.451 [2024-12-09 10:08:31.476215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed
00:57:24.451 [2024-12-09 10:08:31.476222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state.
00:57:24.451 [2024-12-09 10:08:31.476229] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected.
00:57:24.451 [2024-12-09 10:08:31.476232] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets.
00:57:24.451 [2024-12-09 10:08:31.476236] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed.
00:57:24.451 [2024-12-09 10:08:31.483298] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:57:24.451 [2024-12-09 10:08:31.483314] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:57:24.451 [2024-12-09 10:08:31.483318] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:57:24.451 [2024-12-09 10:08:31.483321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:57:24.451 [2024-12-09 10:08:31.483340] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:57:24.451 [2024-12-09 10:08:31.483380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:24.451 [2024-12-09 10:08:31.483391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461780 with addr=10.0.0.3, port=4420
00:57:24.451 [2024-12-09 10:08:31.483398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2461780 is same with the state(6) to be set
00:57:24.451 [2024-12-09 10:08:31.483408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2461780 (9): Bad file descriptor
00:57:24.451 [2024-12-09 10:08:31.483417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:57:24.451 [2024-12-09 10:08:31.483424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:57:24.451 [2024-12-09 10:08:31.483430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:57:24.451 [2024-12-09 10:08:31.483436] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:57:24.451 [2024-12-09 10:08:31.483439] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:57:24.451 [2024-12-09 10:08:31.483442] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:57:24.451 [2024-12-09 10:08:31.486096] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset.
00:57:24.451 [2024-12-09 10:08:31.486110] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted.
00:57:24.451 [2024-12-09 10:08:31.486113] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr.
00:57:24.451 [2024-12-09 10:08:31.486116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller
00:57:24.451 [2024-12-09 10:08:31.486134] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr.
00:57:24.451 [2024-12-09 10:08:31.486162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:24.451 [2024-12-09 10:08:31.486176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24d6c40 with addr=10.0.0.4, port=4420
00:57:24.451 [2024-12-09 10:08:31.486183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d6c40 is same with the state(6) to be set
00:57:24.451 [2024-12-09 10:08:31.486192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d6c40 (9): Bad file descriptor
00:57:24.451 [2024-12-09 10:08:31.486201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state
00:57:24.451 [2024-12-09 10:08:31.486206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed
00:57:24.452 [2024-12-09 10:08:31.486212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state.
00:57:24.452 [2024-12-09 10:08:31.486217] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected.
00:57:24.452 [2024-12-09 10:08:31.486220] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets.
00:57:24.452 [2024-12-09 10:08:31.486223] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed.
00:57:24.712 [2024-12-09 10:08:31.493328] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:57:24.712 [2024-12-09 10:08:31.493344] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:57:24.712 [2024-12-09 10:08:31.493347] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:57:24.712 [2024-12-09 10:08:31.493350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:57:24.712 [2024-12-09 10:08:31.493383] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:57:24.712 [2024-12-09 10:08:31.493415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:24.712 [2024-12-09 10:08:31.493426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461780 with addr=10.0.0.3, port=4420
00:57:24.712 [2024-12-09 10:08:31.493433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2461780 is same with the state(6) to be set
00:57:24.712 [2024-12-09 10:08:31.493442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2461780 (9): Bad file descriptor
00:57:24.712 [2024-12-09 10:08:31.493461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:57:24.712 [2024-12-09 10:08:31.493467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:57:24.712 [2024-12-09 10:08:31.493472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:57:24.712 [2024-12-09 10:08:31.493477] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:57:24.712 [2024-12-09 10:08:31.493480] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:57:24.712 [2024-12-09 10:08:31.493483] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:57:24.712 [2024-12-09 10:08:31.496121] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset.
00:57:24.712 [2024-12-09 10:08:31.496137] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted.
00:57:24.712 [2024-12-09 10:08:31.496140] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr.
00:57:24.712 [2024-12-09 10:08:31.496143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller
00:57:24.712 [2024-12-09 10:08:31.496162] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr.
00:57:24.712 [2024-12-09 10:08:31.496191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:24.712 [2024-12-09 10:08:31.496200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24d6c40 with addr=10.0.0.4, port=4420
00:57:24.712 [2024-12-09 10:08:31.496207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d6c40 is same with the state(6) to be set
00:57:24.712 [2024-12-09 10:08:31.496215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d6c40 (9): Bad file descriptor
00:57:24.712 [2024-12-09 10:08:31.496242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state
00:57:24.712 [2024-12-09 10:08:31.496248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed
00:57:24.712 [2024-12-09 10:08:31.496254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state.
00:57:24.712 [2024-12-09 10:08:31.496260] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected.
00:57:24.712 [2024-12-09 10:08:31.496263] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets.
00:57:24.712 [2024-12-09 10:08:31.496267] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed.
00:57:24.712 [2024-12-09 10:08:31.503371] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:57:24.712 [2024-12-09 10:08:31.503387] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:57:24.712 [2024-12-09 10:08:31.503390] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:57:24.712 [2024-12-09 10:08:31.503393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:57:24.712 [2024-12-09 10:08:31.503410] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:57:24.712 [2024-12-09 10:08:31.503439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:24.712 [2024-12-09 10:08:31.503448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461780 with addr=10.0.0.3, port=4420
00:57:24.712 [2024-12-09 10:08:31.503455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2461780 is same with the state(6) to be set
00:57:24.712 [2024-12-09 10:08:31.503464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2461780 (9): Bad file descriptor
00:57:24.712 [2024-12-09 10:08:31.503472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:57:24.712 [2024-12-09 10:08:31.503486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:57:24.712 [2024-12-09 10:08:31.503492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:57:24.712 [2024-12-09 10:08:31.503497] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:57:24.712 [2024-12-09 10:08:31.503501] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:57:24.712 [2024-12-09 10:08:31.503503] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:57:24.712 [2024-12-09 10:08:31.506150] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset.
00:57:24.712 [2024-12-09 10:08:31.506163] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted.
00:57:24.712 [2024-12-09 10:08:31.506177] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr.
00:57:24.712 [2024-12-09 10:08:31.506179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller
00:57:24.712 [2024-12-09 10:08:31.506213] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr.
00:57:24.712 [2024-12-09 10:08:31.506241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:24.712 [2024-12-09 10:08:31.506250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24d6c40 with addr=10.0.0.4, port=4420
00:57:24.712 [2024-12-09 10:08:31.506256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d6c40 is same with the state(6) to be set
00:57:24.713 [2024-12-09 10:08:31.506266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d6c40 (9): Bad file descriptor
00:57:24.713 [2024-12-09 10:08:31.506275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state
00:57:24.713 [2024-12-09 10:08:31.506281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed
00:57:24.713 [2024-12-09 10:08:31.506287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state.
00:57:24.713 [2024-12-09 10:08:31.506292] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected.
00:57:24.713 [2024-12-09 10:08:31.506295] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets.
00:57:24.713 [2024-12-09 10:08:31.506298] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed.
00:57:24.713 [2024-12-09 10:08:31.513398] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:57:24.713 [2024-12-09 10:08:31.513416] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:57:24.713 [2024-12-09 10:08:31.513419] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:57:24.713 [2024-12-09 10:08:31.513422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:57:24.713 [2024-12-09 10:08:31.513456] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:57:24.713 [2024-12-09 10:08:31.513498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:24.713 [2024-12-09 10:08:31.513510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2461780 with addr=10.0.0.3, port=4420
00:57:24.713 [2024-12-09 10:08:31.513517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2461780 is same with the state(6) to be set
00:57:24.713 [2024-12-09 10:08:31.513528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2461780 (9): Bad file descriptor
00:57:24.713 [2024-12-09 10:08:31.513538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:57:24.713 [2024-12-09 10:08:31.513545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:57:24.713 [2024-12-09 10:08:31.513552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:57:24.713 [2024-12-09 10:08:31.513557] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:57:24.713 [2024-12-09 10:08:31.513561] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:57:24.713 [2024-12-09 10:08:31.513564] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:57:24.713 [2024-12-09 10:08:31.515560] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found
00:57:24.713 [2024-12-09 10:08:31.515584] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again
00:57:24.713 [2024-12-09 10:08:31.515599] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:57:24.713 [2024-12-09 10:08:31.516201] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset.
00:57:24.713 [2024-12-09 10:08:31.516216] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted.
00:57:24.713 [2024-12-09 10:08:31.516220] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr.
00:57:24.713 [2024-12-09 10:08:31.516223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller
00:57:24.713 [2024-12-09 10:08:31.516244] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr.
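The errno = 111 repeated through the loop above is ECONNREFUSED: the test has moved both subsystems' listeners off port 4420, so every reconnect attempt against 10.0.0.3:4420 and 10.0.0.4:4420 is refused until the discovery log page (the "not found" / "found again" entries) repoints each host path at port 4421. A minimal sketch of how that state can be checked by hand, assuming the same SPDK host app on /tmp/host.sock as in this run (rpc_cmd in the harness wraps SPDK's stock scripts/rpc.py client, and mdns0_nvme0 is the controller name the discovery service created above):

    # Print the trsvcid of every active path for the mdns0_nvme0 controller;
    # once failover settles this should show 4421, not 4420.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n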
00:57:24.713 [2024-12-09 10:08:31.516283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:57:24.713 [2024-12-09 10:08:31.516295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24d6c40 with addr=10.0.0.4, port=4420
00:57:24.713 [2024-12-09 10:08:31.516304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d6c40 is same with the state(6) to be set
00:57:24.713 [2024-12-09 10:08:31.516314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d6c40 (9): Bad file descriptor
00:57:24.713 [2024-12-09 10:08:31.516325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state
00:57:24.713 [2024-12-09 10:08:31.516330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed
00:57:24.713 [2024-12-09 10:08:31.516337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state.
00:57:24.713 [2024-12-09 10:08:31.516342] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected.
00:57:24.713 [2024-12-09 10:08:31.516346] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets.
00:57:24.713 [2024-12-09 10:08:31.516349] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed.
00:57:24.713 [2024-12-09 10:08:31.516581] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 not found
00:57:24.713 [2024-12-09 10:08:31.516601] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again
00:57:24.713 [2024-12-09 10:08:31.516613] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command
00:57:24.713 [2024-12-09 10:08:31.603498] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again
00:57:24.713 [2024-12-09 10:08:31.603558] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # get_subsystem_names
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name'
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]]
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # get_bdev_list
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]]
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # get_subsystem_paths mdns0_nvme0
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # [[ 4421 == \4\4\2\1 ]]
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # get_subsystem_paths mdns1_nvme0
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # [[ 4421 == \4\4\2\1 ]]
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # get_notification_count
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length'
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # [[ 0 == 0 ]]
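get_notification_count above is simple bookkeeping: the harness fetches every bdev notification from the last seen notify_id onward (4 at this point) and counts them; nothing new has been recorded since the previous check, so notification_count stays 0 and notify_id stays 4. A hedged sketch of the same check outside the harness, assuming the /tmp/host.sock app and the scripts/rpc.py client from this run:

    # Count bdev notifications (register/unregister events) from id 4 onward;
    # a non-zero result means new bdevs appeared or vanished since that id.
    scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i 4 | jq '. | length'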
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@206 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:25.653 10:08:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@207 -- # sleep 1
00:57:25.653 [2024-12-09 10:08:32.686694] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # get_mdns_discovery_svcs
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name'
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # [[ '' == '' ]]
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # get_subsystem_names
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name'
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # [[ '' == '' ]]
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # get_bdev_list
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # [[ '' == '' ]]
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@212 -- # get_notification_count
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length'
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=4
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=8
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@213 -- # [[ 4 == 4 ]]
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@216 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@217 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # local es=0
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:27.035 [2024-12-09 10:08:33.874186] bdev_mdns_client.c: 471:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns
00:57:27.035 2024/12/09 10:08:33 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists
00:57:27.035 request:
00:57:27.035 {
00:57:27.035 "method": "bdev_nvme_start_mdns_discovery",
00:57:27.035 "params": {
00:57:27.035 "name": "mdns",
00:57:27.035 "svcname": "_nvme-disc._http",
00:57:27.035 "hostnqn": "nqn.2021-12.io.spdk:test"
00:57:27.035 }
00:57:27.035 }
00:57:27.035 Got JSON-RPC error response
00:57:27.035 GoRPCClient: error on JSON-RPC call
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:57:27.035 10:08:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@218 -- # sleep 5
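The Code=-17 (File exists) failure above is the expected outcome of this negative test: only one mDNS discovery service may run under a given bdev name, and one named mdns is already active, so a second start request is rejected even though its svcname differs. A sketch of the failing call as a standalone command, assuming the same socket and host NQN as this run:

    # A second mDNS discovery under the existing name 'mdns' must be refused.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
        -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test
    # expected: error received for bdev_nvme_start_mdns_discovery method,
    #           err: Code=-17 Msg=File exists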
00:57:27.604 [2024-12-09 10:08:34.457967] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED
00:57:27.604 [2024-12-09 10:08:34.557765] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW
00:57:27.863 [2024-12-09 10:08:34.657582] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local'
00:57:27.863 [2024-12-09 10:08:34.657604] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4)
00:57:27.863 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:57:27.863 cookie is 0
00:57:27.863 is_local: 1
00:57:27.863 our_own: 0
00:57:27.863 wide_area: 0
00:57:27.863 multicast: 1
00:57:27.864 cached: 1
00:57:27.864 [2024-12-09 10:08:34.757390] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local'
00:57:27.864 [2024-12-09 10:08:34.757413] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4)
00:57:27.864 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:57:27.864 cookie is 0
00:57:27.864 is_local: 1
00:57:27.864 our_own: 0
00:57:27.864 wide_area: 0
00:57:27.864 multicast: 1
00:57:27.864 cached: 1
00:57:27.864 [2024-12-09 10:08:34.757421] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.4 trid->trsvcid: 8009
00:57:27.864 [2024-12-09 10:08:34.857202] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local'
00:57:27.864 [2024-12-09 10:08:34.857230] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3)
00:57:27.864 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:57:27.864 cookie is 0
00:57:27.864 is_local: 1
00:57:27.864 our_own: 0
00:57:27.864 wide_area: 0
00:57:27.864 multicast: 1
00:57:28.123 cached: 1
00:57:28.123 [2024-12-09 10:08:34.957005] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local'
00:57:28.123 [2024-12-09 10:08:34.957024] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3)
00:57:28.123 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:57:28.123 cookie is 0
00:57:28.123 is_local: 1
00:57:28.123 our_own: 0
00:57:28.123 wide_area: 0
00:57:28.123 multicast: 1
00:57:28.123 cached: 1
00:57:28.123 [2024-12-09 10:08:34.957031] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009
00:57:28.690 [2024-12-09 10:08:35.665844] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached
00:57:28.690 [2024-12-09 10:08:35.665879] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected
00:57:28.690 [2024-12-09 10:08:35.665890] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command
00:57:28.950 [2024-12-09 10:08:35.751775] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new subsystem mdns0_nvme0
00:57:28.950 [2024-12-09 10:08:35.810042] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] ctrlr was created to 10.0.0.4:4421
00:57:28.950 [2024-12-09 10:08:35.810599] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] Connecting qpair 0x24f5f30:1 started.
00:57:28.950 [2024-12-09 10:08:35.811957] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done
00:57:28.950 [2024-12-09 10:08:35.811993] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again
00:57:28.950 [2024-12-09 10:08:35.814516] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] qpair 0x24f5f30 was disconnected and freed. delete nvme_qpair.
00:57:28.950 [2024-12-09 10:08:35.865407] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached
00:57:28.950 [2024-12-09 10:08:35.865438] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected
00:57:28.950 [2024-12-09 10:08:35.865450] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:57:28.950 [2024-12-09 10:08:35.951335] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem mdns1_nvme0
00:57:29.210 [2024-12-09 10:08:36.009570] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421
00:57:29.210 [2024-12-09 10:08:36.010058] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x24f6ca0:1 started.
00:57:29.210 [2024-12-09 10:08:36.011416] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done
00:57:29.210 [2024-12-09 10:08:36.011441] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again
00:57:29.210 [2024-12-09 10:08:36.014096] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x24f6ca0 was disconnected and freed. delete nvme_qpair.
00:57:32.521 10:08:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # get_mdns_discovery_svcs
00:57:32.521 10:08:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info
00:57:32.521 10:08:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs
00:57:32.521 10:08:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name'
00:57:32.521 10:08:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:32.521 10:08:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:32.521 10:08:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort
00:57:32.521 10:08:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:32.521 10:08:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # [[ mdns == \m\d\n\s ]]
00:57:32.521 10:08:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # get_discovery_ctrlrs
00:57:32.521 10:08:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:57:32.521 10:08:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:32.521 10:08:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:32.521 10:08:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name'
00:57:32.521 10:08:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort
00:57:32.521 10:08:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs
00:57:32.521 10:08:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:32.521 10:08:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]]
00:57:32.521 10:08:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # get_bdev_list
00:57:32.521 10:08:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]]
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@225 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # local es=0
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:32.521 [2024-12-09 10:08:39.055033] bdev_mdns_client.c: 476:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp
00:57:32.521 2024/12/09 10:08:39 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists
00:57:32.521 request:
00:57:32.521 {
00:57:32.521 "method": "bdev_nvme_start_mdns_discovery",
00:57:32.521 "params": {
00:57:32.521 "name": "cdc",
00:57:32.521 "svcname": "_nvme-disc._tcp",
00:57:32.521 "hostnqn": "nqn.2021-12.io.spdk:test"
00:57:32.521 }
00:57:32.521 }
00:57:32.521 Got JSON-RPC error response
00:57:32.521 GoRPCClient: error on JSON-RPC call
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # get_discovery_ctrlrs
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name'
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]]
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # get_bdev_list
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]]
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@228 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@231 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 found
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.3
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p
00:57:32.521 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local
00:57:32.522 +;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:57:32.522 +;(null);IPv4;spdk1;_nvme-disc._tcp;local
00:57:32.522 +;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:57:32.522 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:57:32.522 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:57:32.522 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:57:32.522 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"'
00:57:32.522 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines
00:57:32.522 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:57:32.522 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:57:32.522 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]]
00:57:32.522 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:57:32.522 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:57:32.522 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:57:32.522 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:57:32.522 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]]
00:57:32.522 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:57:32.522 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:57:32.522 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:57:32.522 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]]
00:57:32.522 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ 
=;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 00:57:32.522 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:57:32.522 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:57:32.522 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:57:32.522 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:57:32.522 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 00:57:32.522 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 00:57:32.522 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 00:57:32.522 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 00:57:32.522 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@232 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:57:32.522 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:32.522 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:32.522 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:32.522 10:08:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@234 -- # sleep 1 00:57:32.522 [2024-12-09 10:08:39.248746] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:57:33.459 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@236 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 'not found' 00:57:33.459 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:57:33.459 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.3 00:57:33.459 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:57:33.459 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 00:57:33.459 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:57:33.459 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:57:33.459 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:57:33.459 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:57:33.459 
=;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:57:33.459 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:57:33.459 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:57:33.459 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:57:33.459 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:57:33.459 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:57:33.459 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:57:33.459 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:57:33.459 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:57:33.459 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:57:33.459 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:57:33.459 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:57:33.459 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:57:33.459 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@238 -- # rpc_cmd nvmf_stop_mdns_prr 00:57:33.459 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:33.459 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:33.459 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:33.459 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@240 -- # trap - SIGINT SIGTERM EXIT 00:57:33.459 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@242 -- # kill 95946 00:57:33.459 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@245 -- # wait 95946 00:57:33.459 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@246 -- # kill 95976 00:57:33.459 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@247 -- # nvmftestfini 00:57:33.459 Got SIGTERM, quitting. 00:57:33.459 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:57:33.459 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # sync 00:57:33.459 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 00:57:33.459 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 00:57:33.459 avahi-daemon 0.8 exiting. 
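The 'not found' pass above is the inverse of the earlier 'found' check: the same avahi-browse scan runs, but success now means no record for spdk1 survives the listener removal. A condensed sketch of the matcher the xtrace is stepping through (the real helper lives in host/mdns_discovery.sh; this form is simplified, not verbatim):

check_mdns_request_exists() {
    local process=$1 ip=$2 port=$3 check_type=$4
    local output line
    # -p yields the parsable ';'-separated records seen in the output above
    output=$(avahi-browse -t -r _nvme-disc._tcp -p)
    readarray -t lines <<< "$output"
    for line in "${lines[@]}"; do
        # A hit must name the responder and carry the expected address/port
        if [[ $line == *"$process"* && $line == *"$ip"* && $line == *"$port"* ]]; then
            [[ $check_type == found ]] && return 0
            return 1    # record still advertised although it should be gone
        fi
    done
    [[ $check_type == found ]] && return 1
    return 0
}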
00:57:33.459 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:57:33.459 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set +e 00:57:33.459 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:57:33.459 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:57:33.459 rmmod nvme_tcp 00:57:33.459 rmmod nvme_fabrics 00:57:33.459 rmmod nvme_keyring 00:57:33.717 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:57:33.717 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@128 -- # set -e 00:57:33.717 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@129 -- # return 0 00:57:33.717 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@517 -- # '[' -n 95896 ']' 00:57:33.717 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@518 -- # killprocess 95896 00:57:33.717 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # '[' -z 95896 ']' 00:57:33.717 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # kill -0 95896 00:57:33.717 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # uname 00:57:33.717 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:57:33.717 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95896 00:57:33.717 killing process with pid 95896 00:57:33.717 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:57:33.717 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:57:33.717 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95896' 00:57:33.717 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@973 -- # kill 95896 00:57:33.717 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@978 -- # wait 95896 00:57:33.717 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:57:33.717 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:57:33.717 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:57:33.717 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@297 -- # iptr 00:57:33.717 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:57:33.717 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-save 00:57:33.717 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:57:33.717 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:57:33.717 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:57:33.717 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:57:33.717 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:57:33.975 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:57:33.975 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:57:33.975 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:57:33.975 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:57:33.975 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:57:33.975 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:57:33.975 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:57:33.975 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:57:33.975 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:57:33.975 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:57:33.975 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:57:33.975 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:57:33.975 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:33.975 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:57:33.975 10:08:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:33.975 10:08:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@300 -- # return 0 00:57:33.975 00:57:33.975 real 0m22.935s 00:57:33.975 user 0m43.795s 00:57:33.975 sys 0m2.523s 00:57:33.975 10:08:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:57:33.975 10:08:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:33.975 ************************************ 00:57:33.975 END TEST nvmf_mdns_discovery 00:57:33.975 ************************************ 00:57:34.234 10:08:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:57:34.234 10:08:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:57:34.234 10:08:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:57:34.234 10:08:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:57:34.234 10:08:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:57:34.234 ************************************ 00:57:34.234 START TEST nvmf_host_multipath 00:57:34.234 ************************************ 00:57:34.234 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:57:34.235 * Looking for test storage... 
00:57:34.235 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:57:34.235 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:57:34.235 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:57:34.235 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:57:34.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:34.495 --rc genhtml_branch_coverage=1 00:57:34.495 --rc genhtml_function_coverage=1 00:57:34.495 --rc genhtml_legend=1 00:57:34.495 --rc geninfo_all_blocks=1 00:57:34.495 --rc geninfo_unexecuted_blocks=1 00:57:34.495 00:57:34.495 ' 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:57:34.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:34.495 --rc genhtml_branch_coverage=1 00:57:34.495 --rc genhtml_function_coverage=1 00:57:34.495 --rc genhtml_legend=1 00:57:34.495 --rc geninfo_all_blocks=1 00:57:34.495 --rc geninfo_unexecuted_blocks=1 00:57:34.495 00:57:34.495 ' 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:57:34.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:34.495 --rc genhtml_branch_coverage=1 00:57:34.495 --rc genhtml_function_coverage=1 00:57:34.495 --rc genhtml_legend=1 00:57:34.495 --rc geninfo_all_blocks=1 00:57:34.495 --rc geninfo_unexecuted_blocks=1 00:57:34.495 00:57:34.495 ' 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:57:34.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:34.495 --rc genhtml_branch_coverage=1 00:57:34.495 --rc genhtml_function_coverage=1 00:57:34.495 --rc genhtml_legend=1 00:57:34.495 --rc geninfo_all_blocks=1 00:57:34.495 --rc geninfo_unexecuted_blocks=1 00:57:34.495 00:57:34.495 ' 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:57:34.495 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:57:34.495 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:57:34.496 Cannot find device "nvmf_init_br" 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:57:34.496 Cannot find device "nvmf_init_br2" 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:57:34.496 Cannot find device "nvmf_tgt_br" 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:57:34.496 Cannot find device "nvmf_tgt_br2" 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:57:34.496 Cannot find device "nvmf_init_br" 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:57:34.496 Cannot find device "nvmf_init_br2" 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:57:34.496 Cannot find device "nvmf_tgt_br" 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:57:34.496 Cannot find device "nvmf_tgt_br2" 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:57:34.496 Cannot find device "nvmf_br" 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:57:34.496 Cannot find device "nvmf_init_if" 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:57:34.496 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:57:34.756 Cannot find device "nvmf_init_if2" 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:57:34.756 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:57:34.756 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:57:34.756 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:57:34.756 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.125 ms 00:57:34.756 00:57:34.756 --- 10.0.0.3 ping statistics --- 00:57:34.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:34.756 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:57:34.756 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:57:34.756 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.114 ms 00:57:34.756 00:57:34.756 --- 10.0.0.4 ping statistics --- 00:57:34.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:34.756 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:57:34.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:57:34.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:57:34.756 00:57:34.756 --- 10.0.0.1 ping statistics --- 00:57:34.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:34.756 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:57:34.756 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:57:34.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:57:34.756 00:57:34.756 --- 10.0.0.2 ping statistics --- 00:57:34.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:34.756 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:57:34.756 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:57:34.757 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:57:34.757 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:57:34.757 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:57:34.757 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:57:34.757 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:57:34.757 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:57:34.757 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:57:34.757 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:57:34.757 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:57:34.757 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:57:34.757 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:57:34.757 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:57:34.757 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=96633 00:57:34.757 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:57:34.757 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 96633 00:57:34.757 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 96633 ']' 00:57:34.757 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:57:34.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:57:34.757 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:57:34.757 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:57:34.757 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:57:34.757 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:57:35.016 [2024-12-09 10:08:41.843590] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
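At this point nvmf_veth_init has assembled the topology the four pings just verified: initiator veths in the root namespace, target veths moved into nvmf_tgt_ns_spdk, peer ends stitched together by a bridge. Condensed from the commands traced above (the if2/br2 pair, the link-up steps, and the commented iptables wrappers are handled the same way and omitted here):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT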
00:57:35.016 [2024-12-09 10:08:41.843727] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:57:35.016 [2024-12-09 10:08:41.995401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:57:35.016 [2024-12-09 10:08:42.040807] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:57:35.016 [2024-12-09 10:08:42.040851] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:57:35.016 [2024-12-09 10:08:42.040858] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:57:35.016 [2024-12-09 10:08:42.040862] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:57:35.016 [2024-12-09 10:08:42.040867] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:57:35.016 [2024-12-09 10:08:42.041730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:57:35.016 [2024-12-09 10:08:42.041730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:57:35.953 10:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:57:35.953 10:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:57:35.954 10:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:57:35.954 10:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:57:35.954 10:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:57:35.954 10:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:57:35.954 10:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=96633 00:57:35.954 10:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:57:35.954 [2024-12-09 10:08:42.956060] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:57:35.954 10:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:57:36.213 Malloc0 00:57:36.213 10:08:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:57:36.472 10:08:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:57:36.731 10:08:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:57:36.990 [2024-12-09 10:08:43.784198] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:57:36.990 10:08:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4421 00:57:36.990 [2024-12-09 10:08:44.003836] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:57:36.990 10:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=96732 00:57:36.990 10:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:57:36.990 10:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:57:36.990 10:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 96732 /var/tmp/bdevperf.sock 00:57:36.990 10:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 96732 ']' 00:57:36.990 10:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:57:36.990 10:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:57:36.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:57:36.990 10:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:57:36.990 10:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:57:36.990 10:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:57:37.929 10:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:57:37.929 10:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:57:37.929 10:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:57:38.188 10:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:57:38.447 Nvme0n1 00:57:38.707 10:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:57:38.966 Nvme0n1 00:57:38.966 10:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:57:38.966 10:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:57:39.906 10:08:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:57:39.906 10:08:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:57:40.167 10:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
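With both listeners registered on 4420 and 4421, every test round that follows has the same shape: flip the ANA state per listener, let bdevperf drive I/O for six seconds under the bpftrace path probes, then check which port actually carried the traffic. A hedged condensation (command forms lifted from the xtrace below; helper bodies simplified, and $expected_port is a hypothetical name for the 4420/4421 argument):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

set_ANA_state() {    # e.g. set_ANA_state non_optimized optimized
    "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420 -n "$1"
    "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4421 -n "$2"
}

# confirm_io_on_port reduces '@path[10.0.0.3, 4421]: 22214' probe lines to
# the first port observed and compares it with the expected listener:
port=$(cut -d ']' -f1 /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt \
    | awk '$1=="@path[10.0.0.3," {print $2}' | sed -n 1p)
[[ $port == "$expected_port" ]]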
00:57:40.426 10:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:57:40.426 10:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96821 00:57:40.426 10:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96633 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:57:40.427 10:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:57:46.999 10:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:57:46.999 10:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:57:46.999 10:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:57:46.999 10:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:57:46.999 Attaching 4 probes... 00:57:46.999 @path[10.0.0.3, 4421]: 22214 00:57:46.999 @path[10.0.0.3, 4421]: 23193 00:57:46.999 @path[10.0.0.3, 4421]: 23380 00:57:47.000 @path[10.0.0.3, 4421]: 23191 00:57:47.000 @path[10.0.0.3, 4421]: 23579 00:57:47.000 10:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:57:47.000 10:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:57:47.000 10:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:57:47.000 10:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:57:47.000 10:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:57:47.000 10:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:57:47.000 10:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96821 00:57:47.000 10:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:57:47.000 10:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:57:47.000 10:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:57:47.000 10:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:57:47.000 10:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:57:47.000 10:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96951 00:57:47.000 10:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96633 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:57:47.000 10:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:57:53.574 10:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:57:53.574 10:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:57:53.574 10:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:57:53.574 10:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:57:53.574 Attaching 4 probes... 00:57:53.574 @path[10.0.0.3, 4420]: 23235 00:57:53.574 @path[10.0.0.3, 4420]: 22241 00:57:53.574 @path[10.0.0.3, 4420]: 23452 00:57:53.574 @path[10.0.0.3, 4420]: 21974 00:57:53.574 @path[10.0.0.3, 4420]: 23340 00:57:53.574 10:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:57:53.574 10:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:57:53.574 10:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:57:53.574 10:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:57:53.574 10:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:57:53.574 10:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:57:53.574 10:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96951 00:57:53.574 10:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:57:53.574 10:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:57:53.574 10:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:57:53.574 10:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:57:53.834 10:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:57:53.834 10:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96633 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:57:53.834 10:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97089 00:57:53.834 10:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:58:00.427 10:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:58:00.427 10:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:58:00.427 10:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:58:00.427 10:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:58:00.427 Attaching 4 probes... 
00:58:00.427 @path[10.0.0.3, 4421]: 16374 00:58:00.427 @path[10.0.0.3, 4421]: 22832 00:58:00.427 @path[10.0.0.3, 4421]: 22999 00:58:00.427 @path[10.0.0.3, 4421]: 22436 00:58:00.427 @path[10.0.0.3, 4421]: 22808 00:58:00.427 10:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:58:00.427 10:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:58:00.427 10:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:58:00.427 10:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:58:00.427 10:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:58:00.427 10:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:58:00.427 10:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97089 00:58:00.427 10:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:58:00.427 10:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:58:00.427 10:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:58:00.427 10:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:58:00.687 10:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:58:00.687 10:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97219 00:58:00.687 10:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:58:00.688 10:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96633 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:58:07.258 10:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:58:07.258 10:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:58:07.258 10:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:58:07.258 10:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:58:07.258 Attaching 4 probes... 
00:58:07.258 00:58:07.258 00:58:07.258 00:58:07.258 00:58:07.258 00:58:07.258 10:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:58:07.258 10:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:58:07.258 10:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:58:07.258 10:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:58:07.258 10:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:58:07.258 10:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:58:07.258 10:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97219 00:58:07.258 10:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:58:07.258 10:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:58:07.258 10:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:58:07.258 10:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:58:07.258 10:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:58:07.258 10:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97351 00:58:07.258 10:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96633 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:58:07.258 10:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:58:13.849 10:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:58:13.849 10:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:58:13.849 10:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:58:13.849 10:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:58:13.849 Attaching 4 probes... 
00:58:13.849 @path[10.0.0.3, 4421]: 21113
00:58:13.849 @path[10.0.0.3, 4421]: 22960
00:58:13.849 @path[10.0.0.3, 4421]: 23416
00:58:13.849 @path[10.0.0.3, 4421]: 23042
00:58:13.849 @path[10.0.0.3, 4421]: 23173
00:58:13.849 10:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1
00:58:13.849 10:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}'
00:58:13.849 10:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p
00:58:13.849 10:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421
00:58:13.849 10:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:58:13.849 10:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:58:13.849 10:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97351
00:58:13.849 10:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:58:13.849 10:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:58:13.849 [2024-12-09 10:09:20.703937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeace90 is same with the state(6) to be set
[... identical recv-state messages (10:09:20.703984 through 10:09:20.704175) omitted ...]
00:58:13.850 [2024-12-09 10:09:20.704179] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeace90 is same with the state(6) to be set
00:58:13.850 10:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1
00:58:14.802 10:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420
00:58:14.802 10:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97486
00:58:14.802 10:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96633 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:58:14.802 10:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6
00:58:21.406 10:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:58:21.406 10:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid'
00:58:21.406 10:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420
00:58:21.406 10:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:58:21.406 Attaching 4 probes...
00:58:21.406 @path[10.0.0.3, 4420]: 22466 00:58:21.406 @path[10.0.0.3, 4420]: 22951 00:58:21.406 @path[10.0.0.3, 4420]: 22978 00:58:21.406 @path[10.0.0.3, 4420]: 23138 00:58:21.406 @path[10.0.0.3, 4420]: 23247 00:58:21.406 10:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:58:21.406 10:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:58:21.406 10:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:58:21.406 10:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:58:21.406 10:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:58:21.406 10:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:58:21.406 10:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97486 00:58:21.406 10:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:58:21.406 10:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:58:21.406 [2024-12-09 10:09:28.166896] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:58:21.406 10:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:58:21.406 10:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:58:27.980 10:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:58:27.980 10:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97678 00:58:27.980 10:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:58:27.980 10:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96633 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:58:34.599 10:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:58:34.599 10:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:58:34.599 10:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:58:34.599 10:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:58:34.599 Attaching 4 probes... 
00:58:34.599 @path[10.0.0.3, 4421]: 22830 00:58:34.599 @path[10.0.0.3, 4421]: 23181 00:58:34.599 @path[10.0.0.3, 4421]: 22974 00:58:34.599 @path[10.0.0.3, 4421]: 23049 00:58:34.599 @path[10.0.0.3, 4421]: 22898 00:58:34.599 10:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:58:34.599 10:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:58:34.599 10:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:58:34.599 10:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:58:34.599 10:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:58:34.599 10:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:58:34.599 10:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97678 00:58:34.599 10:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:58:34.599 10:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 96732 00:58:34.599 10:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 96732 ']' 00:58:34.599 10:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 96732 00:58:34.599 10:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:58:34.599 10:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:58:34.599 10:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96732 00:58:34.599 10:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:58:34.599 10:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:58:34.599 killing process with pid 96732 00:58:34.599 10:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96732' 00:58:34.599 10:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 96732 00:58:34.599 10:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 96732 00:58:34.599 { 00:58:34.599 "results": [ 00:58:34.599 { 00:58:34.599 "job": "Nvme0n1", 00:58:34.599 "core_mask": "0x4", 00:58:34.599 "workload": "verify", 00:58:34.599 "status": "terminated", 00:58:34.599 "verify_range": { 00:58:34.599 "start": 0, 00:58:34.599 "length": 16384 00:58:34.599 }, 00:58:34.599 "queue_depth": 128, 00:58:34.599 "io_size": 4096, 00:58:34.599 "runtime": 54.82812, 00:58:34.599 "iops": 9728.438618723385, 00:58:34.599 "mibps": 38.00171335438822, 00:58:34.599 "io_failed": 0, 00:58:34.599 "io_timeout": 0, 00:58:34.599 "avg_latency_us": 13137.345969393147, 00:58:34.599 "min_latency_us": 1416.6078602620087, 00:58:34.599 "max_latency_us": 7091853.75021834 00:58:34.599 } 00:58:34.599 ], 00:58:34.599 "core_count": 1 00:58:34.599 } 00:58:34.599 10:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 96732 00:58:34.600 10:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:58:34.600 [2024-12-09 10:08:44.077574] Starting SPDK v25.01-pre git sha1 b71c8b8dd / 
DPDK 24.03.0 initialization... 00:58:34.600 [2024-12-09 10:08:44.077655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96732 ] 00:58:34.600 [2024-12-09 10:08:44.224363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:58:34.600 [2024-12-09 10:08:44.273770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:58:34.600 Running I/O for 90 seconds... 00:58:34.600 11820.00 IOPS, 46.17 MiB/s [2024-12-09T10:09:41.644Z] 11574.00 IOPS, 45.21 MiB/s [2024-12-09T10:09:41.644Z] 11493.00 IOPS, 44.89 MiB/s [2024-12-09T10:09:41.644Z] 11511.75 IOPS, 44.97 MiB/s [2024-12-09T10:09:41.644Z] 11545.60 IOPS, 45.10 MiB/s [2024-12-09T10:09:41.644Z] 11545.33 IOPS, 45.10 MiB/s [2024-12-09T10:09:41.644Z] 11583.29 IOPS, 45.25 MiB/s [2024-12-09T10:09:41.644Z] 11581.25 IOPS, 45.24 MiB/s [2024-12-09T10:09:41.644Z] [2024-12-09 10:08:54.009234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:89208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.600 [2024-12-09 10:08:54.009287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:58:34.600 [2024-12-09 10:08:54.009330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:89216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.600 [2024-12-09 10:08:54.009342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:58:34.600 [2024-12-09 10:08:54.009357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:89224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.600 [2024-12-09 10:08:54.009368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:58:34.600 [2024-12-09 10:08:54.009382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:89232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.600 [2024-12-09 10:08:54.009391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:58:34.600 [2024-12-09 10:08:54.009406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:89240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.600 [2024-12-09 10:08:54.009415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:58:34.600 [2024-12-09 10:08:54.009430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:89248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.600 [2024-12-09 10:08:54.009439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:58:34.600 [2024-12-09 10:08:54.009453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:89256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.600 [2024-12-09 10:08:54.009463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005a p:0 m:0 dnr:0 
00:58:34.600 [2024-12-09 10:08:54.009487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:89264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.600 [2024-12-09 10:08:54.009497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:58:34.600 [2024-12-09 10:08:54.009512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:89272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.600 [2024-12-09 10:08:54.009521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:58:34.600 [2024-12-09 10:08:54.009535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:89280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.600 [2024-12-09 10:08:54.009545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:58:34.600 [2024-12-09 10:08:54.009582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:89288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.600 [2024-12-09 10:08:54.009592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:58:34.600 [2024-12-09 10:08:54.009606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:89296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.600 [2024-12-09 10:08:54.009616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:58:34.600 [2024-12-09 10:08:54.009630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:89304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.600 [2024-12-09 10:08:54.009640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:58:34.600 [2024-12-09 10:08:54.009654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:89312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.600 [2024-12-09 10:08:54.009663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:34.600 [2024-12-09 10:08:54.009677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.600 [2024-12-09 10:08:54.009687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:58:34.600 [2024-12-09 10:08:54.009701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.600 [2024-12-09 10:08:54.009711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:58:34.600 [2024-12-09 10:08:54.009725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:89336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.600 [2024-12-09 10:08:54.009734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:99 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:58:34.600 [2024-12-09 10:08:54.009749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:89344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.600 [2024-12-09 10:08:54.009759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:58:34.600 [2024-12-09 10:08:54.009774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:89352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.600 [2024-12-09 10:08:54.009783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:58:34.600 [2024-12-09 10:08:54.009798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:89360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.600 [2024-12-09 10:08:54.009808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:58:34.600 [2024-12-09 10:08:54.009822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:89368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.600 [2024-12-09 10:08:54.009831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:58:34.600 [2024-12-09 10:08:54.009845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:89376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.600 [2024-12-09 10:08:54.009855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:58:34.600 [2024-12-09 10:08:54.009874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:89384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.600 [2024-12-09 10:08:54.009883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:58:34.600 [2024-12-09 10:08:54.009898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.600 [2024-12-09 10:08:54.009907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:58:34.600 [2024-12-09 10:08:54.009921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:89400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.600 [2024-12-09 10:08:54.009931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:58:34.600 [2024-12-09 10:08:54.009945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:89408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.600 [2024-12-09 10:08:54.009955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:58:34.600 [2024-12-09 10:08:54.009969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:89416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.600 [2024-12-09 10:08:54.009978] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:58:34.600 [2024-12-09 10:08:54.009993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:89424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.600 [2024-12-09 10:08:54.010002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:58:34.600 [2024-12-09 10:08:54.010017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:89432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.600 [2024-12-09 10:08:54.010026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:58:34.600 [2024-12-09 10:08:54.010040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:89440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.600 [2024-12-09 10:08:54.010050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:58:34.600 [2024-12-09 10:08:54.010064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:89448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.600 [2024-12-09 10:08:54.010073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:58:34.600 [2024-12-09 10:08:54.010088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:89456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.600 [2024-12-09 10:08:54.010097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:58:34.600 [2024-12-09 10:08:54.010584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:89464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.600 [2024-12-09 10:08:54.010606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:58:34.600 [2024-12-09 10:08:54.010625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:89472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.600 [2024-12-09 10:08:54.010635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:58:34.600 [2024-12-09 10:08:54.010650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:89480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.600 [2024-12-09 10:08:54.010668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:58:34.601 [2024-12-09 10:08:54.010683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:89488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.601 [2024-12-09 10:08:54.010693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:58:34.601 [2024-12-09 10:08:54.010707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:89496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:58:34.601 [2024-12-09 10:08:54.010716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:58:34.601 [2024-12-09 10:08:54.010731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:89504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.601 [2024-12-09 10:08:54.010741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:58:34.601 [2024-12-09 10:08:54.010756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:89512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.601 [2024-12-09 10:08:54.010765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:58:34.601 [2024-12-09 10:08:54.010780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:89520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.601 [2024-12-09 10:08:54.010789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:58:34.601 [2024-12-09 10:08:54.010840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:89528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.601 [2024-12-09 10:08:54.010852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:58:34.601 [2024-12-09 10:08:54.010867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:89536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.601 [2024-12-09 10:08:54.010877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:58:34.601 [2024-12-09 10:08:54.010892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:89544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.601 [2024-12-09 10:08:54.010901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:58:34.601 [2024-12-09 10:08:54.010916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:89552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.601 [2024-12-09 10:08:54.010925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:58:34.601 [2024-12-09 10:08:54.010940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:89560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.601 [2024-12-09 10:08:54.010949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.601 [2024-12-09 10:08:54.010964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.601 [2024-12-09 10:08:54.010973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:34.601 [2024-12-09 10:08:54.010989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 
lba:89576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.601 [2024-12-09 10:08:54.011004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:58:34.601 [2024-12-09 10:08:54.011019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:89584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.601 [2024-12-09 10:08:54.011029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:58:34.601 [2024-12-09 10:08:54.011044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:89592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.601 [2024-12-09 10:08:54.011053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:58:34.601 [2024-12-09 10:08:54.011068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:88968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.601 [2024-12-09 10:08:54.011077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:58:34.601 [2024-12-09 10:08:54.011092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:88976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.601 [2024-12-09 10:08:54.011101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:58:34.601 [2024-12-09 10:08:54.011116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:88984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.601 [2024-12-09 10:08:54.011126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:58:34.601 [2024-12-09 10:08:54.011141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:88992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.601 [2024-12-09 10:08:54.011150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:58:34.601 [2024-12-09 10:08:54.011164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:89000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.601 [2024-12-09 10:08:54.011174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:58:34.601 [2024-12-09 10:08:54.011188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:89008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.601 [2024-12-09 10:08:54.011197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:58:34.601 [2024-12-09 10:08:54.011211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:89016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.601 [2024-12-09 10:08:54.011221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:58:34.601 [2024-12-09 10:08:54.011237] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:89024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:58:34.601 [2024-12-09 10:08:54.011247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000c p:0 m:0 dnr:0
[... repeated nvme_qpair command/completion trace, 2024-12-09 10:08:54.011261-10:08:54.013374: READ (lba 89032-89184) and WRITE (lba 89600-89976) commands, sqid:1 nsid:1 len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1, sqhd 000d-0050 ...]
00:58:34.603 [2024-12-09 10:08:54.013388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:89192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:58:34.603 [2024-12-09 10:08:54.013398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:58:34.603 [2024-12-09 10:08:54.013412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:89200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:58:34.603 [2024-12-09 10:08:54.013422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:58:34.603 11576.44 IOPS, 45.22 MiB/s [2024-12-09T10:09:41.647Z] 11524.60 IOPS, 45.02 MiB/s [2024-12-09T10:09:41.647Z] 11527.00 IOPS, 45.03 MiB/s [2024-12-09T10:09:41.647Z] 11520.83 IOPS, 45.00 MiB/s [2024-12-09T10:09:41.647Z] 11502.08 IOPS, 44.93 MiB/s [2024-12-09T10:09:41.647Z] 11501.29 IOPS, 44.93 MiB/s [2024-12-09T10:09:41.647Z]
[... repeated nvme_qpair command/completion trace, 2024-12-09 10:09:00.525922-10:09:00.530821: READ (lba 24760-24856) and WRITE (lba 24864-25744) commands, sqid:1 nsid:1 len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 ...]
00:58:34.606 [2024-12-09 10:09:00.530838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:25752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:58:34.606 [2024-12-09 10:09:00.530848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:58:34.606
[2024-12-09 10:09:00.530868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.606 [2024-12-09 10:09:00.530879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:58:34.606 [2024-12-09 10:09:00.530895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.606 [2024-12-09 10:09:00.530905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:58:34.606 [2024-12-09 10:09:00.530927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.606 [2024-12-09 10:09:00.530938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:58:34.606 [2024-12-09 10:09:00.530955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.606 [2024-12-09 10:09:00.530969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:58:34.606 [2024-12-09 10:09:00.530986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.606 [2024-12-09 10:09:00.530996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:58:34.606 [2024-12-09 10:09:00.531012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.606 [2024-12-09 10:09:00.531023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:58:34.606 [2024-12-09 10:09:00.531039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:24872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.606 [2024-12-09 10:09:00.531049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:58:34.606 [2024-12-09 10:09:00.531066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.606 [2024-12-09 10:09:00.531077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:58:34.606 [2024-12-09 10:09:00.531093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.606 [2024-12-09 10:09:00.531104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:58:34.606 [2024-12-09 10:09:00.531121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.606 [2024-12-09 10:09:00.531131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:58:34.606 [2024-12-09 10:09:00.531147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.606 [2024-12-09 10:09:00.531158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.531175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.607 [2024-12-09 10:09:00.531185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.531201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.607 [2024-12-09 10:09:00.531212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.531229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.607 [2024-12-09 10:09:00.531241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.531257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.607 [2024-12-09 10:09:00.531272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.531298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.607 [2024-12-09 10:09:00.531309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.531327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.607 [2024-12-09 10:09:00.531337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.531354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.607 [2024-12-09 10:09:00.531364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.531381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.607 [2024-12-09 10:09:00.531391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.531408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.607 [2024-12-09 10:09:00.531419] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.531437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.607 [2024-12-09 10:09:00.531447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.531464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.607 [2024-12-09 10:09:00.531484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.531501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.607 [2024-12-09 10:09:00.531511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.531528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:24920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.607 [2024-12-09 10:09:00.531539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.531555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.607 [2024-12-09 10:09:00.531565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.531582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.607 [2024-12-09 10:09:00.531592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.531609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.607 [2024-12-09 10:09:00.531625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.531643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.607 [2024-12-09 10:09:00.531654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.531670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.607 [2024-12-09 10:09:00.531681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.531699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:58:34.607 [2024-12-09 10:09:00.531709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.531725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.607 [2024-12-09 10:09:00.531736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.531753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.607 [2024-12-09 10:09:00.531764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.548177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.607 [2024-12-09 10:09:00.548218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.548240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.607 [2024-12-09 10:09:00.548254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.548274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.607 [2024-12-09 10:09:00.548287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.548307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.607 [2024-12-09 10:09:00.548320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.548340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.607 [2024-12-09 10:09:00.548353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.548372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:25032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.607 [2024-12-09 10:09:00.548385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.548405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.607 [2024-12-09 10:09:00.548418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.548452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 
lba:25048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.607 [2024-12-09 10:09:00.548465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.548498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:25056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.607 [2024-12-09 10:09:00.548512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.548531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.607 [2024-12-09 10:09:00.548543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.548563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.607 [2024-12-09 10:09:00.548575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.548595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.607 [2024-12-09 10:09:00.548607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.548626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.607 [2024-12-09 10:09:00.548639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.548658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.607 [2024-12-09 10:09:00.548671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.548690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.607 [2024-12-09 10:09:00.548702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.548722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.607 [2024-12-09 10:09:00.548734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.548754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:25120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.607 [2024-12-09 10:09:00.548766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.548786] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:25128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.607 [2024-12-09 10:09:00.548799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:58:34.607 [2024-12-09 10:09:00.548819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.608 [2024-12-09 10:09:00.548831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:58:34.608 [2024-12-09 10:09:00.549688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.608 [2024-12-09 10:09:00.549715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:58:34.608 [2024-12-09 10:09:00.549738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.608 [2024-12-09 10:09:00.549751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:58:34.608 [2024-12-09 10:09:00.549771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.608 [2024-12-09 10:09:00.549784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:58:34.608 [2024-12-09 10:09:00.549804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.608 [2024-12-09 10:09:00.549816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:58:34.608 [2024-12-09 10:09:00.549836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.608 [2024-12-09 10:09:00.549848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:58:34.608 [2024-12-09 10:09:00.549868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.608 [2024-12-09 10:09:00.549880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:58:34.608 [2024-12-09 10:09:00.549900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:25192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.608 [2024-12-09 10:09:00.549912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:34.608 [2024-12-09 10:09:00.549932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.608 [2024-12-09 10:09:00.549945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:58:34.608 
[2024-12-09 10:09:00.549964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.608 [2024-12-09 10:09:00.549976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:58:34.608 [2024-12-09 10:09:00.549996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:25216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.608 [2024-12-09 10:09:00.550008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:58:34.608 [2024-12-09 10:09:00.550028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.608 [2024-12-09 10:09:00.550040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:58:34.608 [2024-12-09 10:09:00.550059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.608 [2024-12-09 10:09:00.550071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:58:34.608 [2024-12-09 10:09:00.550090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.608 [2024-12-09 10:09:00.550113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:58:34.608 [2024-12-09 10:09:00.550134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:25248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.608 [2024-12-09 10:09:00.550146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:58:34.608 [2024-12-09 10:09:00.550166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:25256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.608 [2024-12-09 10:09:00.550179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:58:34.608 [2024-12-09 10:09:00.550199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.608 [2024-12-09 10:09:00.550211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:58:34.608 [2024-12-09 10:09:00.550231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.608 [2024-12-09 10:09:00.550243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:58:34.608 [2024-12-09 10:09:00.550263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:25280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.608 [2024-12-09 10:09:00.550275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:120 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:58:34.608 [2024-12-09 10:09:00.550295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.608 [2024-12-09 10:09:00.550307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:58:34.608 [2024-12-09 10:09:00.550326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.608 [2024-12-09 10:09:00.550339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:58:34.608 [2024-12-09 10:09:00.550359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:25304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.608 [2024-12-09 10:09:00.550371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:58:34.608 [2024-12-09 10:09:00.550390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:25312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.608 [2024-12-09 10:09:00.550402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:58:34.608 [2024-12-09 10:09:00.550422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:25320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.608 [2024-12-09 10:09:00.550434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:58:34.608 [2024-12-09 10:09:00.550454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:25328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.608 [2024-12-09 10:09:00.550466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:58:34.608 [2024-12-09 10:09:00.550499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.608 [2024-12-09 10:09:00.550518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:58:34.608 [2024-12-09 10:09:00.550538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.608 [2024-12-09 10:09:00.550550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:58:34.608 [2024-12-09 10:09:00.550569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.608 [2024-12-09 10:09:00.550581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:58:34.608 [2024-12-09 10:09:00.550600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.608 [2024-12-09 10:09:00.550612] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:58:34.608 [2024-12-09 10:09:00.550632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.608 [2024-12-09 10:09:00.550644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:58:34.608 [2024-12-09 10:09:00.550664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.608 [2024-12-09 10:09:00.550676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:58:34.608 [2024-12-09 10:09:00.550695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.608 [2024-12-09 10:09:00.550707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:58:34.608 [2024-12-09 10:09:00.550726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.608 [2024-12-09 10:09:00.550738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:58:34.608 [2024-12-09 10:09:00.550757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.608 [2024-12-09 10:09:00.550769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:58:34.608 [2024-12-09 10:09:00.550789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.550801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.550820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.550832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.550851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.550863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.550882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.550894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.550919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:58:34.609 [2024-12-09 10:09:00.550932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.550951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.550963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.550982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:25456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.550994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.551013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:25464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.551026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.551046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:25472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.551058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.551077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.551089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.551108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:25488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.551120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.551140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.551152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.551171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.551183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.551202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:25512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.551214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.551234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 
lba:25520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.551246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.551265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:25528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.551277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.551316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.551329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.551348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:25544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.551360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.551379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.551392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.551411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.551423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.551443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.551455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.551484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:25576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.551498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.551517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:25584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.551529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.551549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:25592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.551561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.551580] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:25600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.551592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.551612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.551624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.551643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.551655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.551675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.551687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.551706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.551724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.551744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:25640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.551756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.551775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.551787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.552581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.552605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.552628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:25664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.552640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.552660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:25672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.552672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 
00:58:34.609 [2024-12-09 10:09:00.552691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.552703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.552723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.552735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.552754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.552766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.552786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.552798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.552817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.552829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.552849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.609 [2024-12-09 10:09:00.552861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:58:34.609 [2024-12-09 10:09:00.552880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.610 [2024-12-09 10:09:00.552901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:58:34.610 [2024-12-09 10:09:00.552921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:25736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.610 [2024-12-09 10:09:00.552933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:58:34.610 [2024-12-09 10:09:00.552953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:25744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.610 [2024-12-09 10:09:00.552965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:58:34.610 [2024-12-09 10:09:00.552985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.610 [2024-12-09 10:09:00.552997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:35 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:58:34.610 [2024-12-09 10:09:00.553015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:25760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.610 [2024-12-09 10:09:00.553028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:58:34.610 [2024-12-09 10:09:00.553047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.610 [2024-12-09 10:09:00.553059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:58:34.610 [2024-12-09 10:09:00.553079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.610 [2024-12-09 10:09:00.553091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:58:34.610 [2024-12-09 10:09:00.553110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.610 [2024-12-09 10:09:00.553122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:58:34.610 [2024-12-09 10:09:00.553142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.610 [2024-12-09 10:09:00.553154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:58:34.610 [2024-12-09 10:09:00.553173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.610 [2024-12-09 10:09:00.553185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:58:34.610 [2024-12-09 10:09:00.553205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.610 [2024-12-09 10:09:00.553217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:58:34.610 [2024-12-09 10:09:00.553236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.610 [2024-12-09 10:09:00.553248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:58:34.610 [2024-12-09 10:09:00.553267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:24776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.610 [2024-12-09 10:09:00.553279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:58:34.610 [2024-12-09 10:09:00.553305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.610 [2024-12-09 10:09:00.553317] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:58:34.610 [2024-12-09 10:09:00.553337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.610 [2024-12-09 10:09:00.553349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:58:34.610 [2024-12-09 10:09:00.553368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.610 [2024-12-09 10:09:00.553380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:58:34.610 [2024-12-09 10:09:00.553400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.610 [2024-12-09 10:09:00.553411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:58:34.610 [2024-12-09 10:09:00.553431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.610 [2024-12-09 10:09:00.553443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:58:34.610 [2024-12-09 10:09:00.553463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.610 [2024-12-09 10:09:00.553487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:58:34.610 [2024-12-09 10:09:00.553507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.610 [2024-12-09 10:09:00.553519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:58:34.610 [2024-12-09 10:09:00.553538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.610 [2024-12-09 10:09:00.553550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:58:34.610 [2024-12-09 10:09:00.553570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.610 [2024-12-09 10:09:00.553582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:58:34.610 [2024-12-09 10:09:00.553601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.610 [2024-12-09 10:09:00.553614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:58:34.610 [2024-12-09 10:09:00.553633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24888 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:58:34.610 [2024-12-09 10:09:00.553645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0
[... identical nvme_qpair.c: 243:nvme_io_qpair_print_command / nvme_qpair.c: 474:spdk_nvme_print_completion *NOTICE* pairs repeat for every queued I/O on the qpair: WRITE and READ commands (sqid:1, nsid:1, len:8, lba 24760 through 25648, SGL DATA BLOCK OFFSET 0x0 len:0x1000 for writes, SGL TRANSPORT DATA BLOCK for reads) each complete with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0, SPDK timestamps 10:09:00.553645 through 10:09:00.573614 ...]
00:58:34.615 [2024-12-09 10:09:00.573646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:25440 len:8 SGL DATA
BLOCK OFFSET 0x0 len:0x1000 00:58:34.615 [2024-12-09 10:09:00.573672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:58:34.615 [2024-12-09 10:09:00.573715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:25448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.615 [2024-12-09 10:09:00.573740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:34.615 [2024-12-09 10:09:00.573773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:25456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.615 [2024-12-09 10:09:00.573803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:58:34.615 [2024-12-09 10:09:00.573837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.615 [2024-12-09 10:09:00.573865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:58:34.615 [2024-12-09 10:09:00.573898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:25472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.615 [2024-12-09 10:09:00.573925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:58:34.615 [2024-12-09 10:09:00.573958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.615 [2024-12-09 10:09:00.573982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:58:34.615 [2024-12-09 10:09:00.574016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.615 [2024-12-09 10:09:00.574042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:58:34.615 [2024-12-09 10:09:00.574074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:25496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.615 [2024-12-09 10:09:00.574099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:58:34.615 [2024-12-09 10:09:00.574132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.615 [2024-12-09 10:09:00.574157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:58:34.615 [2024-12-09 10:09:00.574190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:25512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.615 [2024-12-09 10:09:00.574215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:58:34.615 [2024-12-09 10:09:00.574248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:22 nsid:1 lba:25520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.615 [2024-12-09 10:09:00.574273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:58:34.615 [2024-12-09 10:09:00.574306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:25528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.615 [2024-12-09 10:09:00.574331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:34.615 [2024-12-09 10:09:00.574364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.616 [2024-12-09 10:09:00.574389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.574422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.616 [2024-12-09 10:09:00.574459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.574510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.616 [2024-12-09 10:09:00.574536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.574569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:25560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.616 [2024-12-09 10:09:00.574594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.574627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:25568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.616 [2024-12-09 10:09:00.574652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.574690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:25576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.616 [2024-12-09 10:09:00.574716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.574750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:25584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.616 [2024-12-09 10:09:00.574774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.574808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.616 [2024-12-09 10:09:00.574833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.574866] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.616 [2024-12-09 10:09:00.574891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.574924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.616 [2024-12-09 10:09:00.574948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.574982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.616 [2024-12-09 10:09:00.575007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.575040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:25624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.616 [2024-12-09 10:09:00.575065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.575099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.616 [2024-12-09 10:09:00.575124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.576297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.616 [2024-12-09 10:09:00.576346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.576372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:25648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.616 [2024-12-09 10:09:00.576386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.576407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:25656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.616 [2024-12-09 10:09:00.576421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.576442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.616 [2024-12-09 10:09:00.576455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.576489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.616 [2024-12-09 10:09:00.576504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005d p:0 m:0 
dnr:0 00:58:34.616 [2024-12-09 10:09:00.576525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.616 [2024-12-09 10:09:00.576539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.576561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.616 [2024-12-09 10:09:00.576574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.576595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.616 [2024-12-09 10:09:00.576609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.576631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.616 [2024-12-09 10:09:00.576644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.576665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.616 [2024-12-09 10:09:00.576679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.576700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:25720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.616 [2024-12-09 10:09:00.576714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.576735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:25728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.616 [2024-12-09 10:09:00.576748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.576770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.616 [2024-12-09 10:09:00.576784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.576823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:25744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.616 [2024-12-09 10:09:00.576837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.576858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.616 [2024-12-09 10:09:00.576873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.576894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.616 [2024-12-09 10:09:00.576907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.576929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.616 [2024-12-09 10:09:00.576942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.576963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.616 [2024-12-09 10:09:00.576977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.576998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.616 [2024-12-09 10:09:00.577012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.577033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.616 [2024-12-09 10:09:00.577047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.577068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.616 [2024-12-09 10:09:00.577082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.577103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.616 [2024-12-09 10:09:00.577117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.577138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.616 [2024-12-09 10:09:00.577152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.577173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.616 [2024-12-09 10:09:00.577186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.577208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.616 [2024-12-09 10:09:00.577221] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:58:34.616 [2024-12-09 10:09:00.577249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.617 [2024-12-09 10:09:00.577263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:58:34.617 [2024-12-09 10:09:00.577284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.617 [2024-12-09 10:09:00.577298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:58:34.617 [2024-12-09 10:09:00.577319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.617 [2024-12-09 10:09:00.577333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:58:34.617 [2024-12-09 10:09:00.577355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.617 [2024-12-09 10:09:00.577368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:58:34.617 [2024-12-09 10:09:00.577389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.617 [2024-12-09 10:09:00.577403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:58:34.617 [2024-12-09 10:09:00.577424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.617 [2024-12-09 10:09:00.577438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:58:34.617 [2024-12-09 10:09:00.577459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.617 [2024-12-09 10:09:00.577484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:58:34.617 [2024-12-09 10:09:00.577506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.617 [2024-12-09 10:09:00.577520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:58:34.617 [2024-12-09 10:09:00.577541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.617 [2024-12-09 10:09:00.577555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:58:34.617 [2024-12-09 10:09:00.577576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24888 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:58:34.617 [2024-12-09 10:09:00.577589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:58:34.617 [2024-12-09 10:09:00.577611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.617 [2024-12-09 10:09:00.577624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:58:34.617 [2024-12-09 10:09:00.577646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.617 [2024-12-09 10:09:00.577659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:58:34.617 [2024-12-09 10:09:00.577681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.617 [2024-12-09 10:09:00.577701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:58:34.617 [2024-12-09 10:09:00.577723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.617 [2024-12-09 10:09:00.577737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:58:34.617 [2024-12-09 10:09:00.577758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.617 [2024-12-09 10:09:00.577771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.617 [2024-12-09 10:09:00.577792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:24936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.617 [2024-12-09 10:09:00.577806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:34.617 [2024-12-09 10:09:00.577828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.617 [2024-12-09 10:09:00.577841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:58:34.617 [2024-12-09 10:09:00.577862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.617 [2024-12-09 10:09:00.577876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:58:34.617 [2024-12-09 10:09:00.577897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.617 [2024-12-09 10:09:00.577911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:58:34.617 [2024-12-09 10:09:00.577932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:50 nsid:1 lba:24968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.617 [2024-12-09 10:09:00.577945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:58:34.617 [2024-12-09 10:09:00.577966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.617 [2024-12-09 10:09:00.577980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:58:34.617 [2024-12-09 10:09:00.578001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.617 [2024-12-09 10:09:00.578014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:58:34.617 [2024-12-09 10:09:00.578036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:24992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.617 [2024-12-09 10:09:00.578049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:58:34.617 [2024-12-09 10:09:00.578070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.617 [2024-12-09 10:09:00.578084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:58:34.617 [2024-12-09 10:09:00.578105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.617 [2024-12-09 10:09:00.578127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:58:34.617 [2024-12-09 10:09:00.578149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.617 [2024-12-09 10:09:00.578163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:58:34.617 [2024-12-09 10:09:00.578184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:25024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.617 [2024-12-09 10:09:00.578198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:58:34.617 [2024-12-09 10:09:00.578219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.617 [2024-12-09 10:09:00.578234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:58:34.617 [2024-12-09 10:09:00.578255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.617 [2024-12-09 10:09:00.578268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:58:34.617 [2024-12-09 10:09:00.578290] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.617 [2024-12-09 10:09:00.578303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:58:34.617 [2024-12-09 10:09:00.578325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:25056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.617 [2024-12-09 10:09:00.578338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:58:34.617 [2024-12-09 10:09:00.578359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.617 [2024-12-09 10:09:00.578373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:58:34.617 [2024-12-09 10:09:00.578394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.617 [2024-12-09 10:09:00.578408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:58:34.617 [2024-12-09 10:09:00.578429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.617 [2024-12-09 10:09:00.578442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.578464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.578487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.578509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:25096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.578523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.578544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.578558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.578587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.578601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.579384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.579419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 
00:58:34.618 [2024-12-09 10:09:00.579444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:25128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.579459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.579493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.579508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.579530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.579543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.579564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.579578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.579599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.579613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.579634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:25168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.579647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.579669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:25176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.579682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.579703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:25184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.579717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.579738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:25192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.579752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.579773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.579786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:117 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.579817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.579832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.579853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.579867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.579888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:25224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.579901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.579923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.579936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.579958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:25240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.579971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.579992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.580005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.580027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:25256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.580040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.580062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.580075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.580096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.580110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.580131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:25280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.580145] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.580166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.580179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.580200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:25296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.580214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.580235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:25304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.580255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.580276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:25312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.580289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.580311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.580324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.580346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.580359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.580380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.580394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.580415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.580429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.580449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.580463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.580494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 
10:09:00.580508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.580529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.580542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.580563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.580577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.580598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.580612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.580633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.618 [2024-12-09 10:09:00.580647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:58:34.618 [2024-12-09 10:09:00.580668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.619 [2024-12-09 10:09:00.580688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:58:34.619 [2024-12-09 10:09:00.580709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.619 [2024-12-09 10:09:00.580723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:58:34.619 [2024-12-09 10:09:00.580744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.619 [2024-12-09 10:09:00.580757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:58:34.619 [2024-12-09 10:09:00.580778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.619 [2024-12-09 10:09:00.580791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:58:34.619 [2024-12-09 10:09:00.580813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.619 [2024-12-09 10:09:00.580826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:58:34.619 [2024-12-09 10:09:00.580847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:25440 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000
00:58:34.619 [2024-12-09 10:09:00.580861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:58:34.619 [2024-12-09 10:09:00.580882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:25448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:58:34.619 [2024-12-09 10:09:00.580896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[several hundred further NOTICE lines elided: alternating nvme_io_qpair_print_command / spdk_nvme_print_completion pairs, WRITE (SGL DATA BLOCK, lba 24864-25776) and READ (SGL TRANSPORT DATA BLOCK, lba 24760-24856) commands on sqid:1, all len:8, every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 10:09:00.580861 through 10:09:00.589557]
00:58:34.624 [2024-12-09 10:09:00.589547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:25040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:58:34.624 [2024-12-09 10:09:00.589557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:58:34.624 [2024-12-09 10:09:00.589571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.624 [2024-12-09 10:09:00.589580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:58:34.624 [2024-12-09 10:09:00.589595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.624 [2024-12-09 10:09:00.589605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:58:34.624 [2024-12-09 10:09:00.589620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.624 [2024-12-09 10:09:00.589629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:58:34.624 [2024-12-09 10:09:00.589644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.624 [2024-12-09 10:09:00.589653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:58:34.624 [2024-12-09 10:09:00.589668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:25080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.624 [2024-12-09 10:09:00.589678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:58:34.624 [2024-12-09 10:09:00.589692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.624 [2024-12-09 10:09:00.589701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:58:34.624 [2024-12-09 10:09:00.589716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.624 [2024-12-09 10:09:00.589725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:58:34.624 [2024-12-09 10:09:00.590257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.624 [2024-12-09 10:09:00.590275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:58:34.624 [2024-12-09 10:09:00.590292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:25112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.624 [2024-12-09 10:09:00.590301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:58:34.624 [2024-12-09 10:09:00.590323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.624 [2024-12-09 10:09:00.590332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:119 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:58:34.624 [2024-12-09 10:09:00.590347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.624 [2024-12-09 10:09:00.590356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:58:34.624 [2024-12-09 10:09:00.590371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.624 [2024-12-09 10:09:00.590380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:58:34.624 [2024-12-09 10:09:00.590394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.624 [2024-12-09 10:09:00.590404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:58:34.624 [2024-12-09 10:09:00.590418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:25152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.624 [2024-12-09 10:09:00.590428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:58:34.624 [2024-12-09 10:09:00.590442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:25160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.624 [2024-12-09 10:09:00.590451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:58:34.624 [2024-12-09 10:09:00.590466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:25168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.624 [2024-12-09 10:09:00.590489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:58:34.624 [2024-12-09 10:09:00.590504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:25176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.624 [2024-12-09 10:09:00.590513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:58:34.624 [2024-12-09 10:09:00.590528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.624 [2024-12-09 10:09:00.590538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:58:34.624 [2024-12-09 10:09:00.590552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.590561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.590576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.590585] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.590600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:25208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.590610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.590624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.590639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.590653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:25224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.590662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.590677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.590686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.590701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:25240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.590710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.590724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.590733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.590748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.590757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.590771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:25264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.590780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.590795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.590804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.590819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:25280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:58:34.625 [2024-12-09 10:09:00.590828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.590842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:25288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.590851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.590866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:25296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.590876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.590891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.590900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.590915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.590928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.590943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.590952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.590967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.590976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.590990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.590999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.591014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.591023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.591037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.591047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.591061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 
lba:25360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.591071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.591086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.591095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.591109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.591118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.591151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.591162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.591178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.591188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.591205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.591215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.591231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.591242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.591263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.591275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.591299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:25424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.591311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.591327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:25432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.591337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.591354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:25440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.591364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.591380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:25448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.591391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.591407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.591417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.591433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.591444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.591460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:25472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.591470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.591495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.591506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.591522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.591533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.591549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:25496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.591560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.591576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:25504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.625 [2024-12-09 10:09:00.591587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:58:34.625 [2024-12-09 10:09:00.591608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.626 [2024-12-09 10:09:00.591618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 
00:58:34.626 [2024-12-09 10:09:00.591635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.626 [2024-12-09 10:09:00.591645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:58:34.626 [2024-12-09 10:09:00.591662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.626 [2024-12-09 10:09:00.591672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:34.626 [2024-12-09 10:09:00.591688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:25536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.626 [2024-12-09 10:09:00.591699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:58:34.626 [2024-12-09 10:09:00.591715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.626 [2024-12-09 10:09:00.591726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:58:34.626 [2024-12-09 10:09:00.591742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.626 [2024-12-09 10:09:00.591753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:58:34.626 [2024-12-09 10:09:00.591769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:25560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.626 [2024-12-09 10:09:00.591780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:58:34.626 [2024-12-09 10:09:00.591796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.626 [2024-12-09 10:09:00.591806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:58:34.626 [2024-12-09 10:09:00.591822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.626 [2024-12-09 10:09:00.591833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:58:34.626 [2024-12-09 10:09:00.591849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.626 [2024-12-09 10:09:00.591860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:58:34.626 [2024-12-09 10:09:00.591876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.626 [2024-12-09 10:09:00.591887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:59 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:58:34.626 [2024-12-09 10:09:00.591903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.626 [2024-12-09 10:09:00.591914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:58:34.626 [2024-12-09 10:09:00.591930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.626 [2024-12-09 10:09:00.591946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:58:34.626 [2024-12-09 10:09:00.592615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.626 [2024-12-09 10:09:00.592635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:58:34.626 [2024-12-09 10:09:00.592652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.626 [2024-12-09 10:09:00.592662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:58:34.626 [2024-12-09 10:09:00.592676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:25632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.626 [2024-12-09 10:09:00.592686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:58:34.626 [2024-12-09 10:09:00.592700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.626 [2024-12-09 10:09:00.592709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:58:34.626 [2024-12-09 10:09:00.592724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.626 [2024-12-09 10:09:00.592733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:58:34.626 [2024-12-09 10:09:00.592747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.626 [2024-12-09 10:09:00.592757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:58:34.626 [2024-12-09 10:09:00.592771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.626 [2024-12-09 10:09:00.592780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:58:34.626 [2024-12-09 10:09:00.592795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:25672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.626 [2024-12-09 10:09:00.592804] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:58:34.626 [2024-12-09 10:09:00.592818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.626 [2024-12-09 10:09:00.592828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:58:34.626 [2024-12-09 10:09:00.592842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.626 [2024-12-09 10:09:00.592851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:58:34.626 [2024-12-09 10:09:00.592866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.626 [2024-12-09 10:09:00.592875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:58:34.626 [2024-12-09 10:09:00.592890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:25704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.626 [2024-12-09 10:09:00.592911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:34.626 [2024-12-09 10:09:00.592925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.626 [2024-12-09 10:09:00.592934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:58:34.626 [2024-12-09 10:09:00.592949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.626 [2024-12-09 10:09:00.592958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:58:34.626 [2024-12-09 10:09:00.592972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.626 [2024-12-09 10:09:00.592981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:58:34.626 [2024-12-09 10:09:00.592996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:25736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.626 [2024-12-09 10:09:00.593005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:58:34.626 [2024-12-09 10:09:00.593019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.626 [2024-12-09 10:09:00.593028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:58:34.626 [2024-12-09 10:09:00.593043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:25752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.626 [2024-12-09 10:09:00.593052] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:58:34.626 [2024-12-09 10:09:00.593067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:25760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.626 [2024-12-09 10:09:00.593076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:58:34.626 [2024-12-09 10:09:00.593090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.626 [2024-12-09 10:09:00.593099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:58:34.626 [2024-12-09 10:09:00.593114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.626 [2024-12-09 10:09:00.593123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.627 [2024-12-09 10:09:00.593147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.627 [2024-12-09 10:09:00.593171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.627 [2024-12-09 10:09:00.593194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.627 [2024-12-09 10:09:00.593223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.627 [2024-12-09 10:09:00.593246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.627 [2024-12-09 10:09:00.593270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24784 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:58:34.627 [2024-12-09 10:09:00.593294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.627 [2024-12-09 10:09:00.593319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.627 [2024-12-09 10:09:00.593342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.627 [2024-12-09 10:09:00.593366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.627 [2024-12-09 10:09:00.593390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.627 [2024-12-09 10:09:00.593413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.627 [2024-12-09 10:09:00.593437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.627 [2024-12-09 10:09:00.593460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.627 [2024-12-09 10:09:00.593494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.627 [2024-12-09 10:09:00.593526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:24 nsid:1 lba:24888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.627 [2024-12-09 10:09:00.593550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.627 [2024-12-09 10:09:00.593574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.627 [2024-12-09 10:09:00.593597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.627 [2024-12-09 10:09:00.593620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.627 [2024-12-09 10:09:00.593644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.627 [2024-12-09 10:09:00.593668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.627 [2024-12-09 10:09:00.593692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.627 [2024-12-09 10:09:00.593715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:24952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.627 [2024-12-09 10:09:00.593739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.627 [2024-12-09 10:09:00.593762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593776] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.627 [2024-12-09 10:09:00.593785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.627 [2024-12-09 10:09:00.593813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.627 [2024-12-09 10:09:00.593837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.627 [2024-12-09 10:09:00.593860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:25000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.627 [2024-12-09 10:09:00.593884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:25008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.627 [2024-12-09 10:09:00.593908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.627 [2024-12-09 10:09:00.593931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.627 [2024-12-09 10:09:00.593955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.627 [2024-12-09 10:09:00.593978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.593993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.627 [2024-12-09 10:09:00.594002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000e p:0 m:0 dnr:0 
00:58:34.627 [2024-12-09 10:09:00.594016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.627 [2024-12-09 10:09:00.594025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.594040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.627 [2024-12-09 10:09:00.594049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:58:34.627 [2024-12-09 10:09:00.594063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.628 [2024-12-09 10:09:00.594072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:58:34.628 [2024-12-09 10:09:00.594087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:25072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.628 [2024-12-09 10:09:00.594100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:58:34.628 [2024-12-09 10:09:00.594114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:25080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.628 [2024-12-09 10:09:00.594123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:58:34.628 [2024-12-09 10:09:00.594138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.628 [2024-12-09 10:09:00.594147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:58:34.628 [2024-12-09 10:09:00.594698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.628 [2024-12-09 10:09:00.594717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:58:34.628 [2024-12-09 10:09:00.594734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.628 [2024-12-09 10:09:00.594744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:58:34.628 [2024-12-09 10:09:00.594758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.628 [2024-12-09 10:09:00.594767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:58:34.628 [2024-12-09 10:09:00.594782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.628 [2024-12-09 10:09:00.594791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:70 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:58:34.628 [2024-12-09 10:09:00.594806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:58:34.628 [2024-12-09 10:09:00.594815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:58:34.628 [2024-12-09 10:09:00.594830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:58:34.628 [2024-12-09 10:09:00.594839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001a p:0 m:0 dnr:0
[... the same WRITE command / ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion NOTICE pair repeats for lba:25144 through lba:25776, sqhd incrementing 001b through 006a ...]
00:58:34.630 [2024-12-09 10:09:00.597401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:58:34.630 [2024-12-09 10:09:00.597410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006b p:0 m:0 dnr:0
[... interleaved READ completions for lba:24768 through lba:24856 and WRITE completions for lba:24864 through lba:25080 follow, each with status ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd running 006c through 0013 ...]
00:58:34.631 [2024-12-09 10:09:00.598629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:58:34.631 [2024-12-09 10:09:00.598649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
[... the pattern repeats for lba:25096 through lba:25584 with different cid values, sqhd 0015 through 0052, every completion again ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:58:34.632 [2024-12-09 10:09:00.600412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:58:34.633 [2024-12-09 10:09:00.600421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02)
qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:58:34.633 [2024-12-09 10:09:00.600568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.633 [2024-12-09 10:09:00.600581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:58:34.633 11129.00 IOPS, 43.47 MiB/s [2024-12-09T10:09:41.677Z] 10697.94 IOPS, 41.79 MiB/s [2024-12-09T10:09:41.677Z] 10726.35 IOPS, 41.90 MiB/s [2024-12-09T10:09:41.677Z] 10777.06 IOPS, 42.10 MiB/s [2024-12-09T10:09:41.677Z] 10799.00 IOPS, 42.18 MiB/s [2024-12-09T10:09:41.677Z] 10830.30 IOPS, 42.31 MiB/s [2024-12-09T10:09:41.677Z] 10822.67 IOPS, 42.28 MiB/s [2024-12-09T10:09:41.677Z] [2024-12-09 10:09:07.470596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:29288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.633 [2024-12-09 10:09:07.470656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:58:34.633 [2024-12-09 10:09:07.470702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:29296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.633 [2024-12-09 10:09:07.470712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:58:34.633 [2024-12-09 10:09:07.470727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:29304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.633 [2024-12-09 10:09:07.470737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:58:34.633 [2024-12-09 10:09:07.470751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:29312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.633 [2024-12-09 10:09:07.470760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:58:34.633 [2024-12-09 10:09:07.470775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:29320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.633 [2024-12-09 10:09:07.470785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:58:34.633 [2024-12-09 10:09:07.470799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:29328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.633 [2024-12-09 10:09:07.470809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:58:34.633 [2024-12-09 10:09:07.470823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:29336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.633 [2024-12-09 10:09:07.470832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:58:34.633 [2024-12-09 10:09:07.470847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:29344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.633 [2024-12-09 10:09:07.470857] nvme_qpair.c: 
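The MiB/s figures track the IOPS samples exactly under a 4 KiB transfer size, which matches the WRITE entries above (len:8 blocks of 512 B, i.e. the len:0x1000 payload). A minimal sanity check of that arithmetic; the 4 KiB-per-I/O assumption is read off the log fields, not stated by the harness:

    # Sanity-check the harness's IOPS -> MiB/s conversion, assuming 4 KiB per I/O
    # (len:8 x 512-byte blocks = 0x1000 bytes, as printed in the WRITE entries).
    IO_BYTES = 0x1000
    for iops in (11129.00, 10697.94, 10726.35, 10777.06, 10799.00, 10830.30, 10822.67):
        print(f"{iops:9.2f} IOPS -> {iops * IO_BYTES / (1 << 20):5.2f} MiB/s")
    # First sample: 11129.00 * 4096 / 1048576 = 43.47 MiB/s, matching the log.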
00:58:34.633 [2024-12-09 10:09:07.470596 .. 10:09:07.476419] [... second burst of command/completion NOTICE pairs, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0:
00:58:34.633   123 WRITEs sqid:1 nsid:1 lba:29288..30264 (step 8, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), sqhd:002e..007f then wrapping 0000..0028;
00:58:34.636   4 READs sqid:1 nsid:1 lba:29256..29280 (len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), sqhd:0029..002c;
00:58:34.636   1 WRITE sqid:1 nsid:1 lba:30272 (len:8), sqhd:002d;
00:58:34.636   37 further WRITEs revisiting lba:29288..29576 under new cids, sqhd:002e..0052 ...]
00:58:34.637 [2024-12-09 10:09:07.476434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:29584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.637 [2024-12-09 10:09:07.476443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:58:34.637 [2024-12-09 10:09:07.476457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.637 [2024-12-09 10:09:07.476467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:58:34.637 [2024-12-09 10:09:07.476506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:29600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.637 [2024-12-09 10:09:07.476517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:58:34.637 [2024-12-09 10:09:07.476543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.637 [2024-12-09 10:09:07.476552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:58:34.637 [2024-12-09 10:09:07.476566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:29616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.637 [2024-12-09 10:09:07.476575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:58:34.637 [2024-12-09 10:09:07.476589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:29624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.637 [2024-12-09 10:09:07.476597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:58:34.637 [2024-12-09 10:09:07.476610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.637 [2024-12-09 10:09:07.476619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:58:34.637 [2024-12-09 10:09:07.476633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:29640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.637 [2024-12-09 10:09:07.476647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:58:34.637 [2024-12-09 10:09:07.476660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:29648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.637 [2024-12-09 10:09:07.476669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:58:34.637 [2024-12-09 10:09:07.476682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:29656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.637 [2024-12-09 10:09:07.476691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:99 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:58:34.637 [2024-12-09 10:09:07.476705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:29664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.637 [2024-12-09 10:09:07.476713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:58:34.637 [2024-12-09 10:09:07.476728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:29672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.637 [2024-12-09 10:09:07.476736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:58:34.637 [2024-12-09 10:09:07.476750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:29680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.637 [2024-12-09 10:09:07.476758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:58:34.637 [2024-12-09 10:09:07.476772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:29688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.637 [2024-12-09 10:09:07.476781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:58:34.637 [2024-12-09 10:09:07.476795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:29696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.637 [2024-12-09 10:09:07.476804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:34.637 [2024-12-09 10:09:07.476817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:29704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.637 [2024-12-09 10:09:07.476826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:58:34.637 [2024-12-09 10:09:07.476839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:29712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.637 [2024-12-09 10:09:07.476848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:58:34.637 [2024-12-09 10:09:07.476861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:29720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.637 [2024-12-09 10:09:07.476870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:58:34.637 [2024-12-09 10:09:07.476884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:29728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.637 [2024-12-09 10:09:07.476892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:58:34.637 [2024-12-09 10:09:07.476906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:29736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.637 [2024-12-09 10:09:07.476918] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:58:34.637 [2024-12-09 10:09:07.476932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:29744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.637 [2024-12-09 10:09:07.476940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:58:34.637 [2024-12-09 10:09:07.476954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.637 [2024-12-09 10:09:07.476963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:58:34.637 [2024-12-09 10:09:07.476976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:29760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.637 [2024-12-09 10:09:07.476984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:58:34.637 [2024-12-09 10:09:07.476998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:29768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.637 [2024-12-09 10:09:07.477007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:58:34.637 [2024-12-09 10:09:07.477021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:29776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.637 [2024-12-09 10:09:07.477029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.477043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:29784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.477051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.477065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.477073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.477087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:29800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.477095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.477109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:29808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.477117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.477131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:58:34.638 [2024-12-09 10:09:07.477139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.477153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.477161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.477175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:29832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.477183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.477201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:29840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.477210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.477223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:29848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.477232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.477246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:29856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.477255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.477268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:29864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.477277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.477290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:29872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.477299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.477312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:29880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.477320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.477334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:29888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.477348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.477362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 
lba:29896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.477371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.477384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:29904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.477393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.477408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.477417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.477430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:29920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.477439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.477930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:29928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.477949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.477972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:29936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.477982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.477995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:29944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.478004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.478018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:29952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.478026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.478040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.478048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.478062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:29968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.478071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.478084] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:29976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.478093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.478106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:29984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.478115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.478129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:29992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.478137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.478151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:30000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.478160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.478173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:30008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.478182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.478198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:30016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.478208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.478222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:30024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.478231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.478244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:30032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.478257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.478271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:30040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.478281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.478295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:30048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.478303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000d p:0 m:0 dnr:0 
00:58:34.638 [2024-12-09 10:09:07.478317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:30056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.478326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.478339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:30064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.478348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.478361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:30072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.478370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.478384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:30080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.478393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.478406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:30088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.638 [2024-12-09 10:09:07.478415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:58:34.638 [2024-12-09 10:09:07.478428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:30096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.639 [2024-12-09 10:09:07.478438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:58:34.639 [2024-12-09 10:09:07.478451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:30104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.639 [2024-12-09 10:09:07.478460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:58:34.639 [2024-12-09 10:09:07.478485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:30112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.639 [2024-12-09 10:09:07.478495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:58:34.639 [2024-12-09 10:09:07.478509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.639 [2024-12-09 10:09:07.478517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:58:34.639 [2024-12-09 10:09:07.478532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:30128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.639 [2024-12-09 10:09:07.478545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:72 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:58:34.639 [2024-12-09 10:09:07.485674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:30136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.639 [2024-12-09 10:09:07.485705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:58:34.639 [2024-12-09 10:09:07.485723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:30144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.639 [2024-12-09 10:09:07.485734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:58:34.639 [2024-12-09 10:09:07.485750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:30152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.639 [2024-12-09 10:09:07.485759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:58:34.639 [2024-12-09 10:09:07.485773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:30160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.639 [2024-12-09 10:09:07.485783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:58:34.639 [2024-12-09 10:09:07.485797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:30168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.639 [2024-12-09 10:09:07.485806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:58:34.639 [2024-12-09 10:09:07.485821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:30176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.639 [2024-12-09 10:09:07.485830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:58:34.639 [2024-12-09 10:09:07.485844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:30184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.639 [2024-12-09 10:09:07.485854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:58:34.639 [2024-12-09 10:09:07.485868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:30192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.639 [2024-12-09 10:09:07.485877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:58:34.639 [2024-12-09 10:09:07.485892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:30200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.639 [2024-12-09 10:09:07.485901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:58:34.639 [2024-12-09 10:09:07.485915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:30208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.639 [2024-12-09 10:09:07.485924] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:34.639 [2024-12-09 10:09:07.485939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:30216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.639 [2024-12-09 10:09:07.485948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:58:34.639 [2024-12-09 10:09:07.485962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:30224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.639 [2024-12-09 10:09:07.485972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:58:34.639 [2024-12-09 10:09:07.485997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.639 [2024-12-09 10:09:07.486006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:58:34.639 [2024-12-09 10:09:07.486021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:30240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.639 [2024-12-09 10:09:07.486030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:58:34.639 [2024-12-09 10:09:07.486045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.639 [2024-12-09 10:09:07.486054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:58:34.639 [2024-12-09 10:09:07.486069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:30256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.639 [2024-12-09 10:09:07.486078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:58:34.639 [2024-12-09 10:09:07.486093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:30264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.639 [2024-12-09 10:09:07.486102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:58:34.639 [2024-12-09 10:09:07.486117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.639 [2024-12-09 10:09:07.486125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:58:34.639 [2024-12-09 10:09:07.486140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.639 [2024-12-09 10:09:07.486150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:58:34.639 [2024-12-09 10:09:07.486164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:29272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:58:34.639 [2024-12-09 10:09:07.486173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:58:34.639 [2024-12-09 10:09:07.486187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:29280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.639 [2024-12-09 10:09:07.486197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:58:34.639 [2024-12-09 10:09:07.486212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:30272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.639 [2024-12-09 10:09:07.486221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:58:34.639 [2024-12-09 10:09:07.486235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:29288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.639 [2024-12-09 10:09:07.486244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:58:34.639 [2024-12-09 10:09:07.486258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:29296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.639 [2024-12-09 10:09:07.486267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:58:34.639 [2024-12-09 10:09:07.486288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.639 [2024-12-09 10:09:07.486297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:58:34.639 [2024-12-09 10:09:07.486312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.639 [2024-12-09 10:09:07.486321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:58:34.639 [2024-12-09 10:09:07.486335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:29320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.639 [2024-12-09 10:09:07.486344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:58:34.639 [2024-12-09 10:09:07.486358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:29328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.639 [2024-12-09 10:09:07.486367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:58:34.639 [2024-12-09 10:09:07.486381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:29336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.639 [2024-12-09 10:09:07.486391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:58:34.639 [2024-12-09 10:09:07.486405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 
lba:29344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.640 [2024-12-09 10:09:07.486414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:58:34.640 [2024-12-09 10:09:07.486428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:29352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.640 [2024-12-09 10:09:07.486438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:58:34.640 [2024-12-09 10:09:07.486452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:29360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.640 [2024-12-09 10:09:07.486461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:58:34.640 [2024-12-09 10:09:07.486487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.640 [2024-12-09 10:09:07.486497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:58:34.640 [2024-12-09 10:09:07.486512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:29376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.640 [2024-12-09 10:09:07.486521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:58:34.640 [2024-12-09 10:09:07.486535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:29384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.640 [2024-12-09 10:09:07.486544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:58:34.640 [2024-12-09 10:09:07.486559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:29392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.640 [2024-12-09 10:09:07.486568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:58:34.640 [2024-12-09 10:09:07.486582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:29400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.640 [2024-12-09 10:09:07.486597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:58:34.640 [2024-12-09 10:09:07.486611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:29408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.640 [2024-12-09 10:09:07.486620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:58:34.640 [2024-12-09 10:09:07.486635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:29416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.640 [2024-12-09 10:09:07.486644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:58:34.640 [2024-12-09 10:09:07.486658] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:29424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.640 [2024-12-09 10:09:07.486667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:58:34.640 [2024-12-09 10:09:07.486682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:29432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.640 [2024-12-09 10:09:07.486691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:58:34.640 [2024-12-09 10:09:07.487331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:29440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.640 [2024-12-09 10:09:07.487352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:34.640 [2024-12-09 10:09:07.487371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.640 [2024-12-09 10:09:07.487380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:58:34.640 [2024-12-09 10:09:07.487395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:29456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.640 [2024-12-09 10:09:07.487405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:58:34.640 [2024-12-09 10:09:07.487420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.640 [2024-12-09 10:09:07.487429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:58:34.640 [2024-12-09 10:09:07.487443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:29472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.640 [2024-12-09 10:09:07.487452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:58:34.640 [2024-12-09 10:09:07.487467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:29480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.640 [2024-12-09 10:09:07.487488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:58:34.640 [2024-12-09 10:09:07.487503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:29488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.640 [2024-12-09 10:09:07.487512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:58:34.640 [2024-12-09 10:09:07.487527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:29496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.640 [2024-12-09 10:09:07.487545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 
00:58:34.640 [2024-12-09 10:09:07.487560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:29504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.640 [2024-12-09 10:09:07.487569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:58:34.640 [2024-12-09 10:09:07.487584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:29512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.640 [2024-12-09 10:09:07.487593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:58:34.640 [2024-12-09 10:09:07.487607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:29520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.640 [2024-12-09 10:09:07.487617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:34.640 [2024-12-09 10:09:07.487631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:29528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.640 [2024-12-09 10:09:07.487640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:58:34.640 [2024-12-09 10:09:07.487654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.640 [2024-12-09 10:09:07.487663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:58:34.640 [2024-12-09 10:09:07.487678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.640 [2024-12-09 10:09:07.487687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:58:34.640 [2024-12-09 10:09:07.487702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:29552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.640 [2024-12-09 10:09:07.487711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:58:34.640 [2024-12-09 10:09:07.487725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:29560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.640 [2024-12-09 10:09:07.487734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:58:34.640 [2024-12-09 10:09:07.487749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.640 [2024-12-09 10:09:07.487758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:58:34.640 [2024-12-09 10:09:07.487772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:29576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.640 [2024-12-09 10:09:07.487781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:94 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:58:34.640 [2024-12-09 10:09:07.487795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:29584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.640 [2024-12-09 10:09:07.487804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:58:34.640 [2024-12-09 10:09:07.487819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:29592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.640 [2024-12-09 10:09:07.487828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:58:34.640 [2024-12-09 10:09:07.487847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:29600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.640 [2024-12-09 10:09:07.487856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:58:34.641 [2024-12-09 10:09:07.487871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.641 [2024-12-09 10:09:07.487880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:58:34.641 [2024-12-09 10:09:07.487895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:29616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.641 [2024-12-09 10:09:07.487904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:58:34.641 [2024-12-09 10:09:07.487918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:29624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.641 [2024-12-09 10:09:07.487928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:58:34.641 [2024-12-09 10:09:07.487942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:29632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.641 [2024-12-09 10:09:07.487951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:58:34.641 [2024-12-09 10:09:07.487965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:29640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.641 [2024-12-09 10:09:07.487975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:58:34.641 [2024-12-09 10:09:07.487989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:29648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.641 [2024-12-09 10:09:07.487998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:58:34.641 [2024-12-09 10:09:07.488012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:29656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.641 [2024-12-09 10:09:07.488022] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:58:34.641 [2024-12-09 10:09:07.488037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:29664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.641 [2024-12-09 10:09:07.488046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:58:34.641 [2024-12-09 10:09:07.488061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:29672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.641 [2024-12-09 10:09:07.488070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:58:34.641 [2024-12-09 10:09:07.488084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:29680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.641 [2024-12-09 10:09:07.488093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:58:34.641 [2024-12-09 10:09:07.488108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:29688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.641 [2024-12-09 10:09:07.488117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:58:34.641 [2024-12-09 10:09:07.488136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:29696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.641 [2024-12-09 10:09:07.488146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:34.641 [2024-12-09 10:09:07.488161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:29704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.641 [2024-12-09 10:09:07.488170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:58:34.641 [2024-12-09 10:09:07.488184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:29712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.641 [2024-12-09 10:09:07.488193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:58:34.641 [2024-12-09 10:09:07.488208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:29720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.641 [2024-12-09 10:09:07.488217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:58:34.641 [2024-12-09 10:09:07.488231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.641 [2024-12-09 10:09:07.488240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:58:34.641 [2024-12-09 10:09:07.488255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:29736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:58:34.641 [2024-12-09 10:09:07.488264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:58:34.641 [2024-12-09 10:09:07.488281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:29744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.641 [2024-12-09 10:09:07.488291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:58:34.641 [2024-12-09 10:09:07.488305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.641 [2024-12-09 10:09:07.488314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:58:34.641 [2024-12-09 10:09:07.488329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:29760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.641 [2024-12-09 10:09:07.488338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:58:34.641 [2024-12-09 10:09:07.488352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.641 [2024-12-09 10:09:07.488361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:58:34.641 [2024-12-09 10:09:07.488376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:29776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.641 [2024-12-09 10:09:07.488385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:58:34.641 [2024-12-09 10:09:07.488400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:29784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.641 [2024-12-09 10:09:07.488409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:58:34.641 [2024-12-09 10:09:07.488424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:29792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.641 [2024-12-09 10:09:07.488437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:58:34.641 [2024-12-09 10:09:07.488452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.641 [2024-12-09 10:09:07.488461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:58:34.641 [2024-12-09 10:09:07.488485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.641 [2024-12-09 10:09:07.488495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:58:34.641 [2024-12-09 10:09:07.488510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:29816 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.641 [2024-12-09 10:09:07.488519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:58:34.642 [2024-12-09 10:09:07.488533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:29824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.642 [2024-12-09 10:09:07.488543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:58:34.642 [2024-12-09 10:09:07.488557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:29832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.642 [2024-12-09 10:09:07.488566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:58:34.642 [2024-12-09 10:09:07.488581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:29840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.642 [2024-12-09 10:09:07.488590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:58:34.642 [2024-12-09 10:09:07.488604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.642 [2024-12-09 10:09:07.488614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:58:34.642 [2024-12-09 10:09:07.488628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.642 [2024-12-09 10:09:07.488637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:58:34.642 [2024-12-09 10:09:07.488651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:29864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.642 [2024-12-09 10:09:07.488661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:58:34.642 [2024-12-09 10:09:07.488676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:29872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.642 [2024-12-09 10:09:07.488685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:58:34.642 [2024-12-09 10:09:07.488699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.642 [2024-12-09 10:09:07.488708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:58:34.642 [2024-12-09 10:09:07.488722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.642 [2024-12-09 10:09:07.488738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:58:34.642 [2024-12-09 10:09:07.488753] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.642 [2024-12-09 10:09:07.488762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:58:34.642 [2024-12-09 10:09:07.488776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:29904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.642 [2024-12-09 10:09:07.488786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:58:34.642 [2024-12-09 10:09:07.488800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:29912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.642 [2024-12-09 10:09:07.488809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:58:34.642 [2024-12-09 10:09:07.489317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:29920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.642 [2024-12-09 10:09:07.489335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:58:34.642 [2024-12-09 10:09:07.489352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:29928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.642 [2024-12-09 10:09:07.489362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:58:34.642 [2024-12-09 10:09:07.489377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:29936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.642 [2024-12-09 10:09:07.489387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:58:34.642 [2024-12-09 10:09:07.489401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:29944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.642 [2024-12-09 10:09:07.489410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.642 [2024-12-09 10:09:07.489425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:29952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.642 [2024-12-09 10:09:07.489434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:34.642 [2024-12-09 10:09:07.489449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:29960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.642 [2024-12-09 10:09:07.489458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:58:34.642 [2024-12-09 10:09:07.489484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:29968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.642 [2024-12-09 10:09:07.489494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 
00:58:34.642 [2024-12-09 10:09:07.489508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:29976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.642 [2024-12-09 10:09:07.489518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:58:34.642 [2024-12-09 10:09:07.489532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.642 [2024-12-09 10:09:07.489541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:58:34.642 [2024-12-09 10:09:07.489564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:29992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.642 [2024-12-09 10:09:07.489573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:58:34.642 [2024-12-09 10:09:07.489588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:30000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.642 [2024-12-09 10:09:07.489597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:58:34.642 [2024-12-09 10:09:07.489611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:30008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.642 [2024-12-09 10:09:07.489620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:58:34.642 [2024-12-09 10:09:07.489635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:30016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.642 [2024-12-09 10:09:07.489645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:58:34.642 [2024-12-09 10:09:07.489659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:30024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.642 [2024-12-09 10:09:07.489668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:58:34.642 [2024-12-09 10:09:07.489683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:30032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.642 [2024-12-09 10:09:07.489691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:58:34.642 [2024-12-09 10:09:07.489706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:30040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.642 [2024-12-09 10:09:07.489715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:58:34.642 [2024-12-09 10:09:07.489730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:30048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.642 [2024-12-09 10:09:07.489738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:58:34.643 [2024-12-09 10:09:07.489753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:30056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.643 [2024-12-09 10:09:07.489762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:58:34.643 [2024-12-09 10:09:07.489777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:30064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.643 [2024-12-09 10:09:07.489786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:58:34.643 [2024-12-09 10:09:07.489800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:30072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.643 [2024-12-09 10:09:07.489810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:58:34.643 [2024-12-09 10:09:07.489824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.643 [2024-12-09 10:09:07.489834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:58:34.643 [2024-12-09 10:09:07.489853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:30088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.643 [2024-12-09 10:09:07.489863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:58:34.643 [2024-12-09 10:09:07.489878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.643 [2024-12-09 10:09:07.489887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:58:34.643 [2024-12-09 10:09:07.489902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.643 [2024-12-09 10:09:07.489911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:58:34.643 [2024-12-09 10:09:07.489925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:30112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.643 [2024-12-09 10:09:07.489935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:58:34.643 [2024-12-09 10:09:07.489949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:30120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.643 [2024-12-09 10:09:07.489958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:58:34.643 [2024-12-09 10:09:07.489973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.643 [2024-12-09 10:09:07.489982] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:58:34.643 [2024-12-09 10:09:07.489996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:30136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.643 [2024-12-09 10:09:07.490006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:58:34.643 [2024-12-09 10:09:07.490020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.643 [2024-12-09 10:09:07.490029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:58:34.643 [2024-12-09 10:09:07.490044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:30152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.643 [2024-12-09 10:09:07.490053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:58:34.643 [2024-12-09 10:09:07.490068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:30160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.643 [2024-12-09 10:09:07.490077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:58:34.643 [2024-12-09 10:09:07.490092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:30168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.643 [2024-12-09 10:09:07.490101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:58:34.643 [2024-12-09 10:09:07.490115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:30176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.643 [2024-12-09 10:09:07.490125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:58:34.643 [2024-12-09 10:09:07.490139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:30184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.643 [2024-12-09 10:09:07.490152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:58:34.643 [2024-12-09 10:09:07.490167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:30192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.643 [2024-12-09 10:09:07.490176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:58:34.643 [2024-12-09 10:09:07.490190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:30200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.643 [2024-12-09 10:09:07.490200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:58:34.643 [2024-12-09 10:09:07.490214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:30208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:58:34.643 [2024-12-09 10:09:07.490223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:34.643 [2024-12-09 10:09:07.490237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:30216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.643 [2024-12-09 10:09:07.490246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:58:34.643 [2024-12-09 10:09:07.490261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:30224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.643 [2024-12-09 10:09:07.490270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:58:34.643 [2024-12-09 10:09:07.490284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:30232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.643 [2024-12-09 10:09:07.490293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:58:34.643 [2024-12-09 10:09:07.490308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:30240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.643 [2024-12-09 10:09:07.490317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:58:34.643 [2024-12-09 10:09:07.490332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:30248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.643 [2024-12-09 10:09:07.490341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:58:34.643 [2024-12-09 10:09:07.490356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.643 [2024-12-09 10:09:07.490365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:58:34.643 [2024-12-09 10:09:07.490379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:30264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.643 [2024-12-09 10:09:07.490388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:58:34.643 [2024-12-09 10:09:07.490403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:29256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.643 [2024-12-09 10:09:07.490412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:58:34.643 [2024-12-09 10:09:07.490427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:29264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.644 [2024-12-09 10:09:07.490440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:58:34.644 [2024-12-09 10:09:07.490455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:29272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.644 [2024-12-09 10:09:07.490464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:58:34.644 [2024-12-09 10:09:07.490487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:29280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.644 [2024-12-09 10:09:07.490497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:58:34.644 [2024-12-09 10:09:07.490511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:30272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.644 [2024-12-09 10:09:07.490520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:58:34.644 [2024-12-09 10:09:07.490535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:29288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.644 [2024-12-09 10:09:07.490544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:58:34.644 [2024-12-09 10:09:07.490558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:29296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.644 [2024-12-09 10:09:07.490567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:58:34.644 [2024-12-09 10:09:07.490581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.644 [2024-12-09 10:09:07.490590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:58:34.644 [2024-12-09 10:09:07.490604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:29312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.644 [2024-12-09 10:09:07.490613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:58:34.644 [2024-12-09 10:09:07.490627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:29320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.644 [2024-12-09 10:09:07.490636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:58:34.644 [2024-12-09 10:09:07.490650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:29328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.644 [2024-12-09 10:09:07.490659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:58:34.644 [2024-12-09 10:09:07.490674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:29336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.644 [2024-12-09 10:09:07.490683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:58:34.644 [2024-12-09 10:09:07.490698] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:29344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.644 [2024-12-09 10:09:07.490707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:58:34.644 [2024-12-09 10:09:07.490722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:29352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.644 [2024-12-09 10:09:07.490731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:58:34.644 [2024-12-09 10:09:07.490750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:29360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.644 [2024-12-09 10:09:07.490759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:58:34.644 [2024-12-09 10:09:07.490773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:29368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.644 [2024-12-09 10:09:07.490782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:58:34.644 [2024-12-09 10:09:07.490796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:29376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.644 [2024-12-09 10:09:07.490806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:58:34.644 [2024-12-09 10:09:07.490820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:29384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.644 [2024-12-09 10:09:07.490829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:58:34.644 [2024-12-09 10:09:07.490844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.644 [2024-12-09 10:09:07.490853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:58:34.644 [2024-12-09 10:09:07.490867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:29400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.644 [2024-12-09 10:09:07.490876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:58:34.644 [2024-12-09 10:09:07.490890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:29408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.644 [2024-12-09 10:09:07.490899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:58:34.644 [2024-12-09 10:09:07.490914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:29416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.644 [2024-12-09 10:09:07.490923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003e p:0 m:0 dnr:0 
00:58:34.644 [2024-12-09 10:09:07.490938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:29424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.644 [2024-12-09 10:09:07.490947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:58:34.644 [2024-12-09 10:09:07.491595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:29432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.644 [2024-12-09 10:09:07.491617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:58:34.644 [2024-12-09 10:09:07.491637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:29440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.644 [2024-12-09 10:09:07.491648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:34.644 [2024-12-09 10:09:07.491664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.644 [2024-12-09 10:09:07.491675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:58:34.644 [2024-12-09 10:09:07.491700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.644 [2024-12-09 10:09:07.491711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:58:34.644 [2024-12-09 10:09:07.491727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.644 [2024-12-09 10:09:07.491737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:58:34.644 [2024-12-09 10:09:07.491756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:29472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.644 [2024-12-09 10:09:07.491766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:58:34.644 [2024-12-09 10:09:07.491783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:29480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.644 [2024-12-09 10:09:07.491793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:58:34.645 [2024-12-09 10:09:07.491809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.645 [2024-12-09 10:09:07.491819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:58:34.645 [2024-12-09 10:09:07.491847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.645 [2024-12-09 10:09:07.491857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:30 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:58:34.645 [2024-12-09 10:09:07.491874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:29504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.645 [2024-12-09 10:09:07.491884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:58:34.645 [2024-12-09 10:09:07.491907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:29512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.645 [2024-12-09 10:09:07.491918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:58:34.645 [2024-12-09 10:09:07.491934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:29520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.645 [2024-12-09 10:09:07.491945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:34.645 [2024-12-09 10:09:07.491961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:29528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.645 [2024-12-09 10:09:07.491971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:58:34.645 [2024-12-09 10:09:07.491987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.645 [2024-12-09 10:09:07.491998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:58:34.645 [2024-12-09 10:09:07.492014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.645 [2024-12-09 10:09:07.492024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:58:34.645 [2024-12-09 10:09:07.492040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:29552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.645 [2024-12-09 10:09:07.492056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:58:34.645 [2024-12-09 10:09:07.492072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:29560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.645 [2024-12-09 10:09:07.492082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:58:34.645 [2024-12-09 10:09:07.492098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:29568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.645 [2024-12-09 10:09:07.492109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:58:34.645 [2024-12-09 10:09:07.492125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.645 [2024-12-09 10:09:07.492135] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:58:34.645 [2024-12-09 10:09:07.492151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:29584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.645 [2024-12-09 10:09:07.492161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:58:34.645 [2024-12-09 10:09:07.492177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.645 [2024-12-09 10:09:07.492188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:58:34.645 [2024-12-09 10:09:07.492206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:29600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.645 [2024-12-09 10:09:07.492216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:58:34.645 [2024-12-09 10:09:07.492233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:29608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.645 [2024-12-09 10:09:07.492243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:58:34.645 [2024-12-09 10:09:07.492259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.645 [2024-12-09 10:09:07.492270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:58:34.645 [2024-12-09 10:09:07.492287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:29624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.645 [2024-12-09 10:09:07.492297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:58:34.645 [2024-12-09 10:09:07.492314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:29632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.645 [2024-12-09 10:09:07.492324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:58:34.645 [2024-12-09 10:09:07.492341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:29640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.645 [2024-12-09 10:09:07.492351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:58:34.645 [2024-12-09 10:09:07.492368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:29648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.645 [2024-12-09 10:09:07.492384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:58:34.645 [2024-12-09 10:09:07.492401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:29656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:58:34.645 [2024-12-09 10:09:07.492412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:58:34.645 [2024-12-09 10:09:07.492428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:29664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.646 [2024-12-09 10:09:07.492438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:58:34.646 [2024-12-09 10:09:07.492454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:29672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.646 [2024-12-09 10:09:07.492465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:58:34.646 [2024-12-09 10:09:07.492492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:29680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.646 [2024-12-09 10:09:07.492503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:58:34.646 [2024-12-09 10:09:07.492532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:29688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.646 [2024-12-09 10:09:07.492542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:58:34.646 [2024-12-09 10:09:07.492556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:29696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.646 [2024-12-09 10:09:07.492565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:34.646 [2024-12-09 10:09:07.492580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:29704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.646 [2024-12-09 10:09:07.492589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:58:34.646 [2024-12-09 10:09:07.492610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:29712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.646 [2024-12-09 10:09:07.492620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:58:34.646 [2024-12-09 10:09:07.497162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:29720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.646 [2024-12-09 10:09:07.497178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:58:34.646 [2024-12-09 10:09:07.497195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:29728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.646 [2024-12-09 10:09:07.497205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:58:34.646 [2024-12-09 10:09:07.497221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 
lba:29736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.646 [2024-12-09 10:09:07.497231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:58:34.646 [2024-12-09 10:09:07.497247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:29744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.646 [2024-12-09 10:09:07.497257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:58:34.646 [2024-12-09 10:09:07.497280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:29752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.646 [2024-12-09 10:09:07.497290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:58:34.646 [2024-12-09 10:09:07.497306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:29760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.646 [2024-12-09 10:09:07.497316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:58:34.646 [2024-12-09 10:09:07.497332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:29768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.646 [2024-12-09 10:09:07.497342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:58:34.646 [2024-12-09 10:09:07.497358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.646 [2024-12-09 10:09:07.497368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:58:34.646 [2024-12-09 10:09:07.497384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:29784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.646 [2024-12-09 10:09:07.497394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:58:34.646 [2024-12-09 10:09:07.497410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:29792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.646 [2024-12-09 10:09:07.497420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:58:34.646 [2024-12-09 10:09:07.497435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.646 [2024-12-09 10:09:07.497445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:58:34.646 [2024-12-09 10:09:07.497461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.646 [2024-12-09 10:09:07.497480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:58:34.646 [2024-12-09 10:09:07.497496] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:29816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.646 [2024-12-09 10:09:07.497507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:58:34.646 [2024-12-09 10:09:07.497522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:29824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.646 [2024-12-09 10:09:07.497532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:58:34.646 [2024-12-09 10:09:07.497548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:29832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.646 [2024-12-09 10:09:07.497558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:58:34.646 [2024-12-09 10:09:07.497573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:29840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.646 [2024-12-09 10:09:07.497583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:58:34.646 [2024-12-09 10:09:07.497605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:29848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.646 [2024-12-09 10:09:07.497616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:58:34.646 [2024-12-09 10:09:07.497631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:29856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.646 [2024-12-09 10:09:07.497642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:58:34.646 [2024-12-09 10:09:07.497658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:29864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.646 [2024-12-09 10:09:07.497668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:58:34.646 [2024-12-09 10:09:07.497683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:29872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.646 [2024-12-09 10:09:07.497694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:58:34.646 [2024-12-09 10:09:07.497709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.646 [2024-12-09 10:09:07.497719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:58:34.646 [2024-12-09 10:09:07.497735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:29888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.646 [2024-12-09 10:09:07.497745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 
00:58:34.646 [2024-12-09 10:09:07.497760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.647 [2024-12-09 10:09:07.497771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:58:34.647 [2024-12-09 10:09:07.497788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:29904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.647 [2024-12-09 10:09:07.497798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:58:34.647 [2024-12-09 10:09:07.498388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:29912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.647 [2024-12-09 10:09:07.498410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:58:34.647 [2024-12-09 10:09:07.498429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:29920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.647 [2024-12-09 10:09:07.498440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:58:34.647 [2024-12-09 10:09:07.498456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:29928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.647 [2024-12-09 10:09:07.498467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:58:34.647 [2024-12-09 10:09:07.498497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:29936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.647 [2024-12-09 10:09:07.498507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:58:34.647 [2024-12-09 10:09:07.498523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.647 [2024-12-09 10:09:07.498542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.647 [2024-12-09 10:09:07.498558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:29952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.647 [2024-12-09 10:09:07.498568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:34.647 [2024-12-09 10:09:07.498584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:29960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.647 [2024-12-09 10:09:07.498594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:58:34.647 [2024-12-09 10:09:07.498610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:29968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.647 [2024-12-09 10:09:07.498620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:26 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:58:34.647 [2024-12-09 10:09:07.498636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:29976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.647 [2024-12-09 10:09:07.498646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:58:34.647 [2024-12-09 10:09:07.498662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:29984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.647 [2024-12-09 10:09:07.498672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:58:34.647 [2024-12-09 10:09:07.498688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:29992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.647 [2024-12-09 10:09:07.498698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:58:34.647 [2024-12-09 10:09:07.498714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:30000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.647 [2024-12-09 10:09:07.498723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:58:34.647 [2024-12-09 10:09:07.498739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:30008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.647 [2024-12-09 10:09:07.498749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:58:34.647 [2024-12-09 10:09:07.498765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:30016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.647 [2024-12-09 10:09:07.498775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:58:34.647 [2024-12-09 10:09:07.498791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:30024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.647 [2024-12-09 10:09:07.498801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:58:34.647 [2024-12-09 10:09:07.498817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:30032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.647 [2024-12-09 10:09:07.498827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:58:34.647 [2024-12-09 10:09:07.498843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:30040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.647 [2024-12-09 10:09:07.498858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:58:34.647 [2024-12-09 10:09:07.498874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:30048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.647 [2024-12-09 10:09:07.498884] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:58:34.654 [2024-12-09 10:09:07.505970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:29504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.654 [2024-12-09 10:09:07.505981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:58:34.654 [2024-12-09 10:09:07.505998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:29512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.654 [2024-12-09 10:09:07.506009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:58:34.654 [2024-12-09 10:09:07.506027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.654 [2024-12-09 10:09:07.506038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:34.654 [2024-12-09 10:09:07.506055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.654 [2024-12-09 10:09:07.506066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:58:34.654 [2024-12-09 10:09:07.506084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:29536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.654 [2024-12-09 10:09:07.506095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:58:34.654 [2024-12-09 10:09:07.506112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:29544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.654 [2024-12-09 10:09:07.506123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:58:34.654 [2024-12-09 10:09:07.506140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:29552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.654 [2024-12-09 10:09:07.506152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:58:34.654 [2024-12-09 10:09:07.506169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.654 [2024-12-09 10:09:07.506180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:58:34.654 [2024-12-09 10:09:07.506202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:29568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.654 [2024-12-09 10:09:07.506214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:58:34.654 [2024-12-09 10:09:07.506232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.654 
[2024-12-09 10:09:07.506242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:58:34.654 [2024-12-09 10:09:07.506260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:29584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.654 [2024-12-09 10:09:07.506270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:58:34.654 [2024-12-09 10:09:07.506288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:29592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.654 [2024-12-09 10:09:07.506299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:58:34.654 [2024-12-09 10:09:07.506317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.654 [2024-12-09 10:09:07.506328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:58:34.654 [2024-12-09 10:09:07.506345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:29608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.654 [2024-12-09 10:09:07.506356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:58:34.654 [2024-12-09 10:09:07.506373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:29616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.654 [2024-12-09 10:09:07.506384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:58:34.654 [2024-12-09 10:09:07.506402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:29624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.654 [2024-12-09 10:09:07.506413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:58:34.654 [2024-12-09 10:09:07.506431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:29632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.654 [2024-12-09 10:09:07.506442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:58:34.654 [2024-12-09 10:09:07.506459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:29640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.654 [2024-12-09 10:09:07.506470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:58:34.654 [2024-12-09 10:09:07.506499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:29648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.654 [2024-12-09 10:09:07.506511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:58:34.654 [2024-12-09 10:09:07.506528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:29656 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.654 [2024-12-09 10:09:07.506540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:58:34.654 [2024-12-09 10:09:07.506563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:29664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.654 [2024-12-09 10:09:07.506575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:58:34.654 [2024-12-09 10:09:07.506592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:29672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.654 [2024-12-09 10:09:07.506603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:58:34.654 [2024-12-09 10:09:07.506620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:29680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.654 [2024-12-09 10:09:07.506632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:58:34.654 [2024-12-09 10:09:07.506649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:29688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.654 [2024-12-09 10:09:07.506660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:58:34.654 [2024-12-09 10:09:07.506677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:29696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.654 [2024-12-09 10:09:07.506688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:34.654 [2024-12-09 10:09:07.506706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:29704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.654 [2024-12-09 10:09:07.506717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:58:34.654 [2024-12-09 10:09:07.506734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:29712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.654 [2024-12-09 10:09:07.506745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:58:34.654 [2024-12-09 10:09:07.506763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.654 [2024-12-09 10:09:07.506774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:58:34.654 [2024-12-09 10:09:07.506791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:29728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.654 [2024-12-09 10:09:07.506802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:58:34.654 [2024-12-09 10:09:07.506820] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:29736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.506830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.506848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:29744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.506859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.506877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:29752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.506889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.506906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.506924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.506942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:29768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.506953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.506970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:29776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.506981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.506999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.507010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.507028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.507039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.507056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:29800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.507067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.507084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:29808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.507096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.507113] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:29816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.507124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.507142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:29824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.507153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.507170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:29832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.507181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.507198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:29840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.507209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.507227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:29848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.507238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.507255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:29856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.507271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.507296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.507308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.507332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:29872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.507344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.507361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.507372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.507391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:29888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.507402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0079 p:0 m:0 
dnr:0 00:58:34.655 [2024-12-09 10:09:07.508028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:29896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.508051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.508071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:29904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.508083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.508101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:29912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.508112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.508129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:29920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.508140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.508158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.508169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.508187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:29936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.508198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.508215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:29944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.508226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.508244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:29952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.508255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.508281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:29960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.508292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.508309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:29968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.508320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.508338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:29976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.508349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.508366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:29984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.508377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.508395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:29992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.508406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.508423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:30000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.508434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.508451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:30008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.508462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.508491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:30016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.508503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.508521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:30024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.508532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.508550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:30032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.508561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.508578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:30040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.508589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.508606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:30048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.508617] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.508640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:30056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.655 [2024-12-09 10:09:07.508652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:58:34.655 [2024-12-09 10:09:07.508669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:30064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.508680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.508697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:30072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.508708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.508726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:30080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.508737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.508754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.508766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.508783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:30096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.508795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.508812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:30104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.508823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.508840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:30112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.508851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.508869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:30120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.508880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.508897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:30128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:58:34.656 [2024-12-09 10:09:07.508909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.508926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:30136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.508937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.508955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:30144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.508966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.508983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:30152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.508999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.509017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:30160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.509028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.509046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:30168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.509057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.509074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:30176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.509086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.509103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:30184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.509114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.509132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:30192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.509143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.509160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.509171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.509189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 
lba:30208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.509200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.509217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.509228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.509246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:30224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.509257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.509274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:30232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.509286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.509303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.509314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.509332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.509347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.509365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:30256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.509376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.509393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.509404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.509423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.656 [2024-12-09 10:09:07.509433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.509451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:29264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.656 [2024-12-09 10:09:07.509462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.509491] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:29272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.656 [2024-12-09 10:09:07.509504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.509522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.656 [2024-12-09 10:09:07.509533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.509551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:30272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.509563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.509580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:29288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.509591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.509609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:29296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.509620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.509638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:29304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.509649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.509666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:29312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.509677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.509695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:29320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.509706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.509729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:29328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.509740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.509759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.509770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 
00:58:34.656 [2024-12-09 10:09:07.509788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:29344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.509799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.509817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:29352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.509828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:58:34.656 [2024-12-09 10:09:07.509845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:29360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.656 [2024-12-09 10:09:07.509856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.509874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:29368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.509885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.509902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:29376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.509913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.509931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:29384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.509942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.509960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:29392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.509971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.509989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:29400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.510000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.510689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:29408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.510711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.510732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.510743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:6 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.510769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:29424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.510780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.510798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.510810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.510827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:29440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.510839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.510856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:29448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.510867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.510886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:29456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.510896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.510914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:29464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.510926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.510943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:29472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.510954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.510972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:29480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.510982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.511000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:29488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.511011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.511028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:29496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.511039] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.511057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.511069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.511086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.511097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.511115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:29520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.511130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.511149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:29528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.511160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.511178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.511189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.511206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:29544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.511217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.511235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:29552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.511246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.511263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:29560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.511275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.511304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:29568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.511315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.511333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:58:34.657 [2024-12-09 10:09:07.511344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.511362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:29584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.511372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.511390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:29592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.511401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.511418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:29600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.511429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.511446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:29608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.511457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.511486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:29616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.511504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.511521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:29624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.511533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.511550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:29632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.511561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.511579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:29640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.511590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.511607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:29648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.511618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.511636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 
lba:29656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.511647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.511664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:29664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.511675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.511693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:29672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.511703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.511721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:29680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.511732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.511750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:29688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.511761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.511778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.511789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.511806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:29704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.511818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.511835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:29712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.511846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.511868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.511880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:58:34.657 [2024-12-09 10:09:07.511897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:29728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.657 [2024-12-09 10:09:07.511908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.511925] nvme_qpair.c: 
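Editorial aside, not part of the captured console output: the "(03/02)" pair in the spdk_nvme_print_completion lines above is the NVMe status code type (SCT) and status code (SC) taken from each completion queue entry. Below is a minimal C sketch of that decoding, assuming the standard NVMe status-field layout (phase tag in bit 0, SC in bits 1-8, SCT in bits 9-11, More in bit 14, Do Not Retry in bit 15); it illustrates the encoding and is not SPDK source.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Status halfword reconstructed from the log: SCT=0x3 (Path Related
         * Status), SC=0x02 (Asymmetric Access Inaccessible), p/m/dnr all 0. */
        uint16_t status = (0x3u << 9) | (0x02u << 1);

        unsigned p   = status & 0x1u;          /* phase tag */
        unsigned sc  = (status >> 1) & 0xffu;  /* status code */
        unsigned sct = (status >> 9) & 0x7u;   /* status code type */
        unsigned m   = (status >> 14) & 0x1u;  /* more status available */
        unsigned dnr = (status >> 15) & 0x1u;  /* do not retry */

        /* Prints "(03/02) p:0 m:0 dnr:0", matching the log lines above. */
        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
        return 0;
    }

SCT 0x3 with SC 0x02 indicates the ANA group serving the namespace is in the Inaccessible state, and dnr:0 leaves the commands retryable, which is what one would expect while a multipath/ANA test toggles the state of the path under I/O.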
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.511936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.511954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:29744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.511965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.511982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:29752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.511993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.512011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:29760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.512022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.512040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.512051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.512068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.512079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.512096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:29784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.512108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.512125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:29792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.512136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.512154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:29800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.512165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.512182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:29808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.512193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:58:34.658 
[2024-12-09 10:09:07.512216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.512227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.512244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.512255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.512273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:29832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.512284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.512301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:29840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.512312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.512329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.512341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.512358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.512369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.512386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.512397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.512414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:29872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.512426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.512443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:29880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.512454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.513080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:29888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.513103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:44 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.513123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:29896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.513135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.513153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:29904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.513164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.513182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:29912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.513201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.513219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:29920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.513230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.513248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:29928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.513259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.513276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:29936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.513287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.513305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:29944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.513316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.513333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.513344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.513362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:29960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.513373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.513390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:29968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.513401] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.513419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:29976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.513430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.513447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:29984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.513458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.513486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:29992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.513499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.513516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:30000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.513527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.513545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:30008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.513562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.513580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:30016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.513591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.513609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:30024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.513620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.513637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:30032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.513659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.513675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:30040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.513685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.513701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 
10:09:07.513712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.513728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:30056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.513738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.513754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.513765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.513781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.513791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.513807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:30080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.513817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.513834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:30088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.513844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.513860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.658 [2024-12-09 10:09:07.513871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:58:34.658 [2024-12-09 10:09:07.513887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:30104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.513897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.513918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.513929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.513945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:30120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.513956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.513972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:30128 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.513982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.513998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:30136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.514009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.514025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:30144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.514035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.514051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:30152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.514061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.514078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:30160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.514088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.514104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:30168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.514114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.514130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:30176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.514140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.514157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:30184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.514167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.514183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:30192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.514194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.514210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:30200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.514220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.514243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:117 nsid:1 lba:30208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.514254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.514270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:30216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.514280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.514297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.514307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.514323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:30232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.514334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.514350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:30240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.514360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.514376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:30248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.514387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.514403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:30256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.514413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.514429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:30264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.514440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.514456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:29256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.659 [2024-12-09 10:09:07.514467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.514490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:29264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.659 [2024-12-09 10:09:07.514501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.514518] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:29272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.659 [2024-12-09 10:09:07.514528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.514544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.659 [2024-12-09 10:09:07.514555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.514572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:30272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.514587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.514604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:29288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.514614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.514631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:29296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.514641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.514657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:29304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.514668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.514684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:29312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.514694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.514710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:29320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.514720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.514737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:29328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.514747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.514763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:29336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.514774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0034 
p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.514790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:29344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.514800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.514817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:29352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.514827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.514843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.514854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.514870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:29368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.514880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.514896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:29376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.514911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.514928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:29384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.514938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.514954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:29392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.514965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.515610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:29400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.515632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.515651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:29408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.515661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.515678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.515689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.515706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.515716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.515732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.659 [2024-12-09 10:09:07.515743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:58:34.659 [2024-12-09 10:09:07.515759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:29440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.515769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.515786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:29448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.515796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.515813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.515823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.515840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.515850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.515866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:29472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.515877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.515903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:29480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.515914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.515930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:29488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.515940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.515958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:29496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.515969] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.515985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.515995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.516012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.516022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.516038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:29520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.516049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.516065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:29528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.516075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.516091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:29536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.516102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.516118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.516128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.516145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:29552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.516155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.516171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.516181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.516198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:29568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.516208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.516231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:29576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 
[2024-12-09 10:09:07.516242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.516258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.516269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.516285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:29592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.516296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.516312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:29600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.516322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.516340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:29608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.516351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.516367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:29616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.516378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.516395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:29624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.516405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.516422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:29632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.516432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.516448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:29640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.516458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.516486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:29648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.516508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.516522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:29656 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.516531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.516546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:29664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.516555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.516569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:29672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.516582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.516597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:29680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.516606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.516620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:29688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.516630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.516644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:29696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.516653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.516668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.516677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.516691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:29712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.516700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.516715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:29720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.516724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.516738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:29728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.516747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.516763] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:29736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.516773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.516787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.516796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.516811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:29752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.516820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.516834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:29760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.516843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.516858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.516871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:58:34.660 [2024-12-09 10:09:07.516886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.660 [2024-12-09 10:09:07.516895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:58:34.661 [2024-12-09 10:09:07.516909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:29784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.661 [2024-12-09 10:09:07.516919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:58:34.661 [2024-12-09 10:09:07.516933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:29792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.661 [2024-12-09 10:09:07.516942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:58:34.661 [2024-12-09 10:09:07.516957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:29800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.661 [2024-12-09 10:09:07.516966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:58:34.661 [2024-12-09 10:09:07.516980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:29808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.661 [2024-12-09 10:09:07.516989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:58:34.661 [2024-12-09 10:09:07.517004] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:29816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.661 [2024-12-09 10:09:07.517013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:58:34.661 [2024-12-09 10:09:07.517027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:29824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.661 [2024-12-09 10:09:07.517036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:58:34.661 [2024-12-09 10:09:07.517051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:29832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.661 [2024-12-09 10:09:07.517060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:58:34.661 [2024-12-09 10:09:07.517074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:29840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.661 [2024-12-09 10:09:07.517083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:58:34.661 [2024-12-09 10:09:07.517098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.661 [2024-12-09 10:09:07.517107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:58:34.661 [2024-12-09 10:09:07.517122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:29856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.661 [2024-12-09 10:09:07.517131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:58:34.661 [2024-12-09 10:09:07.517146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.661 [2024-12-09 10:09:07.517155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:58:34.661 [2024-12-09 10:09:07.517174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:29872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.661 [2024-12-09 10:09:07.517183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:58:34.661 [2024-12-09 10:09:07.517708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:29880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.661 [2024-12-09 10:09:07.517727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:58:34.661 [2024-12-09 10:09:07.517744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:29888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.661 [2024-12-09 10:09:07.517754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 
m:0 dnr:0
00:58:34.661 [2024-12-09 10:09:07.517769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:29896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:58:34.661 [2024-12-09 10:09:07.517778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007a p:0 m:0 dnr:0
[... the same NOTICE pair (nvme_io_qpair_print_command / spdk_nvme_print_completion) repeats for every remaining outstanding I/O on qid:1: WRITE commands lba:29288-30272 and READ commands lba:29256-29280, all len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd cycling 0x0000-0x007f, timestamps 2024-12-09 10:09:07.517-10:09:07.529, elapsed 00:58:34.661-00:58:34.665 ...]
00:58:34.665 [2024-12-09 10:09:07.529490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:58:34.665 [2024-12-09 10:09:07.529501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:58:34.665 [2024-12-09 10:09:07.529516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:29504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.665 [2024-12-09 10:09:07.529525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:58:34.665 [2024-12-09 10:09:07.529540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:29512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.665 [2024-12-09 10:09:07.529549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:58:34.665 [2024-12-09 10:09:07.529564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:29520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.665 [2024-12-09 10:09:07.529573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:34.665 [2024-12-09 10:09:07.529588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.665 [2024-12-09 10:09:07.529598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:58:34.665 [2024-12-09 10:09:07.529612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:29536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.529622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.529636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.529646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.529661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:29552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.529670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.529685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:29560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.529694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.529709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.529718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.529733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 
lba:29576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.529743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.529758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:29584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.529771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.529786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:29592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.529796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.529810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:29600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.529820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.529834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:29608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.529844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.529859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:29616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.529868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.529883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:29624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.529892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.529907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:29632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.529916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.529931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:29640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.529940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.529955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:29648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.529964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.529979] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:29656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.529988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.530003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:29664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.530012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.530027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:29672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.530036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.530051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:29680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.530063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.530080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.530089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.530104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:29696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.530113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.530128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:29704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.530138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.530152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:29712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.530161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.530176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:29720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.530185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.530200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.530209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 
00:58:34.666 [2024-12-09 10:09:07.530224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:29736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.530234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.530249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:29744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.530258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.530273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.530282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.530297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.530306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.530321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:29768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.530330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.530345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:29776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.530354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.530373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:29784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.530383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.530397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:29792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.530406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.530421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:29800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.530431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.530445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:29808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.530455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:24 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.530470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:29816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.530487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.530502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:29824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.530511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.530527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.530536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.530551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:29840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.530560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.530575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.530585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.530600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:29856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.530610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.531124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:29864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.531142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.531159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:29872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.531168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.531190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:29880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.531201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.531216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:29888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.531225] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.531240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.531250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:58:34.666 [2024-12-09 10:09:07.531265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:29904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.666 [2024-12-09 10:09:07.531274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.531300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:29912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.531311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.531332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:29920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.531341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.531356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:29928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.531365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.531380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:29936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.531390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.531405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:29944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.531414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.531429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:29952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.531439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.531454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:29960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.531463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.531490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:29968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:58:34.667 [2024-12-09 10:09:07.531500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.531515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:29976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.531530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.531545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.531555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.531570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:29992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.531579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.531594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:30000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.531604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.531618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:30008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.531628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.531643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:30016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.531653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.531667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:30024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.531677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.531692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:30032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.531702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.531716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:30040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.531726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.531740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 
lba:30048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.531749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.531764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.531774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.531788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:30064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.531798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.531813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:30072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.531828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.531844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:30080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.531853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.531868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:30088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.531877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.531892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:30096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.531901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.531916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:30104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.531925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.531940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:30112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.531949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.531964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:30120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.531974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.531988] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:30128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.531998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.532012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:30136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.532022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.532037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:30144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.532047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.532061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:30152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.532071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.532085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:30160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.532095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.532109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.532118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.532137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:30176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.532147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.532162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.532172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.532186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:30192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.532196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.532210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:30200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.532220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 
00:58:34.667 [2024-12-09 10:09:07.532235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.532245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.532259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.532268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.532283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:30224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.532293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.532308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.532317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:58:34.667 [2024-12-09 10:09:07.532332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:30240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.667 [2024-12-09 10:09:07.532341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.532356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:30248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.532366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.532381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:30256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.532391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.532405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:30264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.532415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.532433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.668 [2024-12-09 10:09:07.532443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.532458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.668 [2024-12-09 10:09:07.532467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.532491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:29272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.668 [2024-12-09 10:09:07.532501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.532515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:29280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.668 [2024-12-09 10:09:07.532525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.532539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:30272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.532549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.532564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:29288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.532573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.532590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:29296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.532599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.532614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.532624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.532639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:29312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.532648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.532663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:29320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.532672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.532687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:29328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.532697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.532711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:29336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.532721] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.532736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:29344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.532750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.532765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:29352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.532774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.532789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:29360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.532798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.532813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:29368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.532823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.533402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:29376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.533421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.533438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.533449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.533464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:29392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.533485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.533501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.533510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.533526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:29408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.533535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.533551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:29416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:58:34.668 [2024-12-09 10:09:07.533561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.533576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:29424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.533586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.533600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:29432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.533610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.533625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:29440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.533641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.533657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:29448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.533666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.533681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:29456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.533693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.533708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:29464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.533717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.533732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.533741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.533757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.533766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.533781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:29488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.533790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.533805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 
lba:29496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.533814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.533829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.533839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.533854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:29512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.533864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.533879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:29520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.533888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.533903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:29528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.533913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.533928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:29536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.533937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.533956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.533966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.533981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:29552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.533991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.534005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:29560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.534015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.534042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:29568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.534051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.534064] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:29576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.534073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.534087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:29584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.668 [2024-12-09 10:09:07.534095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:58:34.668 [2024-12-09 10:09:07.534109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:29592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.534118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.534132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:29600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.534140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.534154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:29608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.534162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.534176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:29616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.534185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.534198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:29624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.534207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.534220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:29632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.534229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.534246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:29640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.534255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.534269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:29648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.534277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005b p:0 m:0 dnr:0 
00:58:34.669 [2024-12-09 10:09:07.534291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:29656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.534300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.534314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.534322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.534336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:29672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.534344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.534358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:29680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.534367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.534380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.534389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.534402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:29696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.534411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.534424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.534433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.534447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:29712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.534456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.534470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:29720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.534478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.534499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:29728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.534508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:96 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.534521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.534534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.534548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.534557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.534570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:29752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.534579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.534593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:29760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.534601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.534615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:29768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.534623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.534637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:29776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.534645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.534659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.534668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.534682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.534690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.534704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:29800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.534713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.534726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:29808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.534735] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.534748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.534757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.534771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.534780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.534793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.534805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.534819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:29840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.534828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.534842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:29848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.534851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.535025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:29856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.535039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.535068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:29864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.535079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.535095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:29872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.535103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.535120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:29880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.535129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.535146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:29888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:58:34.669 [2024-12-09 10:09:07.535155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.535171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:29896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.535179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.535196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:29904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.535205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.535224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:29912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.535232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.535249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.535258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.535274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:29928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.535283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.535333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:29936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.535343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.535361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:29944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.535370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.535388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:29952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.669 [2024-12-09 10:09:07.535398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:34.669 [2024-12-09 10:09:07.535415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:29960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.535424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.535442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:29968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.535451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.535468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.535478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.535507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.535517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.535535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:29992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.535544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.535562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:30000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.535571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.535588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:30008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.535599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.535616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.535626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.535644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:30024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.535653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.535675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.535685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.535703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.535712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.535729] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:30048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.535739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.535756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:30056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.535766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.535783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.535793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.535810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:30072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.535819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.535837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.535846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.535863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:30088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.535873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.535890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:30096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.535899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.535916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:30104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.535926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.535943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:30112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.535952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.535970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:30120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.535979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 
00:58:34.670 [2024-12-09 10:09:07.535997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:30128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.536010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.536028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:30136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.536037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.536054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:30144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.536063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.536081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:30152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.536091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.536108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:30160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.536117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.536147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:30168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.536155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.536172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:30176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.536180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.536197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:30184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.536206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.536222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.536230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.536247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:30200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.536255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:71 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.536272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:30208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.536280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.536297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:30216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.536305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.536322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:30224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.536334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.536351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:30232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.536359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.536376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:30240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.536385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.536401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:30248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.536410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.536426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:30256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.536435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.536451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:30264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.536460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.536476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:29256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.670 [2024-12-09 10:09:07.536491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.536508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:29264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.670 [2024-12-09 10:09:07.536517] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.536534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:29272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.670 [2024-12-09 10:09:07.536545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.536562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:29280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.670 [2024-12-09 10:09:07.536570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.536587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:30272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.536595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.536612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:29288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.536621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.536637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:29296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.536646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:58:34.670 [2024-12-09 10:09:07.536666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:29304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.670 [2024-12-09 10:09:07.536675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:07.536692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:29312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.671 [2024-12-09 10:09:07.536700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:07.536717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:29320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.671 [2024-12-09 10:09:07.536726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:07.536742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.671 [2024-12-09 10:09:07.536751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:07.536767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:29336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:58:34.671 [2024-12-09 10:09:07.536776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:07.536792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:29344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.671 [2024-12-09 10:09:07.536801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:07.536817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:29352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.671 [2024-12-09 10:09:07.536826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:07.536842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:29360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.671 [2024-12-09 10:09:07.536851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:07.536984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:29368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.671 [2024-12-09 10:09:07.536996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:58:34.671 10592.41 IOPS, 41.38 MiB/s [2024-12-09T10:09:41.715Z] 10131.87 IOPS, 39.58 MiB/s [2024-12-09T10:09:41.715Z] 9709.71 IOPS, 37.93 MiB/s [2024-12-09T10:09:41.715Z] 9321.32 IOPS, 36.41 MiB/s [2024-12-09T10:09:41.715Z] 8962.81 IOPS, 35.01 MiB/s [2024-12-09T10:09:41.715Z] 8630.85 IOPS, 33.71 MiB/s [2024-12-09T10:09:41.715Z] 8322.61 IOPS, 32.51 MiB/s [2024-12-09T10:09:41.715Z] 8184.59 IOPS, 31.97 MiB/s [2024-12-09T10:09:41.715Z] 8271.03 IOPS, 32.31 MiB/s [2024-12-09T10:09:41.715Z] 8374.68 IOPS, 32.71 MiB/s [2024-12-09T10:09:41.715Z] 8481.56 IOPS, 33.13 MiB/s [2024-12-09T10:09:41.715Z] 8572.88 IOPS, 33.49 MiB/s [2024-12-09T10:09:41.715Z] 8661.59 IOPS, 33.83 MiB/s [2024-12-09T10:09:41.715Z] [2024-12-09 10:09:20.704684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:67104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.671 [2024-12-09 10:09:20.704727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:20.704745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:67112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.671 [2024-12-09 10:09:20.704756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:20.704791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:67120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.671 [2024-12-09 10:09:20.704800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:20.704810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:67128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:58:34.671 [2024-12-09 10:09:20.704819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:20.704830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:67136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.671 [2024-12-09 10:09:20.704839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:20.704849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:67144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.671 [2024-12-09 10:09:20.704858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:20.704868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:67152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.671 [2024-12-09 10:09:20.704876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:20.704887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:67160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.671 [2024-12-09 10:09:20.704896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:20.704905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:67168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.671 [2024-12-09 10:09:20.704914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:20.704924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.671 [2024-12-09 10:09:20.704932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:20.704942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:67184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.671 [2024-12-09 10:09:20.704951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:20.704961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:67192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.671 [2024-12-09 10:09:20.704970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:20.704980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:67200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.671 [2024-12-09 10:09:20.704988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:20.704998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:67208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.671 [2024-12-09 10:09:20.705007] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:20.705017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:67216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.671 [2024-12-09 10:09:20.705034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:20.705045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:67224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.671 [2024-12-09 10:09:20.705053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:20.705064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:67232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.671 [2024-12-09 10:09:20.705073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:20.705083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:67240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.671 [2024-12-09 10:09:20.705092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:20.705102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:67248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.671 [2024-12-09 10:09:20.705111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:20.705123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:67256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.671 [2024-12-09 10:09:20.705132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:20.705142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:67264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.671 [2024-12-09 10:09:20.705151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:20.705161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:67272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.671 [2024-12-09 10:09:20.705170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:20.705180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:67280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.671 [2024-12-09 10:09:20.705188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:20.705199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:67288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.671 [2024-12-09 10:09:20.705207] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:20.705217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:67296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.671 [2024-12-09 10:09:20.705226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:20.705235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:67304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.671 [2024-12-09 10:09:20.705244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:20.705254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:67312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.671 [2024-12-09 10:09:20.705262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:20.705277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:67320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.671 [2024-12-09 10:09:20.705286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:20.705296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:67328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.671 [2024-12-09 10:09:20.705305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:20.705315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:67336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.671 [2024-12-09 10:09:20.705323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:20.705333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:67344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.671 [2024-12-09 10:09:20.705342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.671 [2024-12-09 10:09:20.705352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:67352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.672 [2024-12-09 10:09:20.705361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.672 [2024-12-09 10:09:20.705370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:67360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.672 [2024-12-09 10:09:20.705380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.672 [2024-12-09 10:09:20.705390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:67368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.672 [2024-12-09 10:09:20.705399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.672 [2024-12-09 10:09:20.705409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:67376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.672 [2024-12-09 10:09:20.705417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.672 [2024-12-09 10:09:20.705427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:67384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.672 [2024-12-09 10:09:20.705436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.672 [2024-12-09 10:09:20.705446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:67480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.672 [2024-12-09 10:09:20.705455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.672 [2024-12-09 10:09:20.705465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:67488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.672 [2024-12-09 10:09:20.705485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.672 [2024-12-09 10:09:20.705496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:67496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.672 [2024-12-09 10:09:20.705505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.672 [2024-12-09 10:09:20.705515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:67504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.672 [2024-12-09 10:09:20.705528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.672 [2024-12-09 10:09:20.705538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:67512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.672 [2024-12-09 10:09:20.705547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.672 [2024-12-09 10:09:20.705557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:67520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.672 [2024-12-09 10:09:20.705566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.672 [2024-12-09 10:09:20.705575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:67528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.672 [2024-12-09 10:09:20.705584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.672 [2024-12-09 10:09:20.705594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:67536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.672 [2024-12-09 10:09:20.705603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:58:34.672 [2024-12-09 10:09:20.705613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:67544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.672 [2024-12-09 10:09:20.705621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.672 [2024-12-09 10:09:20.705631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:67552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.672 [2024-12-09 10:09:20.705639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.672 [2024-12-09 10:09:20.705649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:67560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.672 [2024-12-09 10:09:20.705657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.672 [2024-12-09 10:09:20.705668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:67568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.672 [2024-12-09 10:09:20.705676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.672 [2024-12-09 10:09:20.705687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:67392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.672 [2024-12-09 10:09:20.705696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.672 [2024-12-09 10:09:20.705706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:67400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.672 [2024-12-09 10:09:20.705714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.672 [2024-12-09 10:09:20.705724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:67408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.672 [2024-12-09 10:09:20.705733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.672 [2024-12-09 10:09:20.705742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:67416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.672 [2024-12-09 10:09:20.705751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.672 [2024-12-09 10:09:20.705760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:67576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.672 [2024-12-09 10:09:20.705773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.672 [2024-12-09 10:09:20.705783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:67584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.672 [2024-12-09 10:09:20.705791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.672 
[2024-12-09 10:09:20.705801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:67592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.672 [2024-12-09 10:09:20.705810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.672 [2024-12-09 10:09:20.705820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:67600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.672 [2024-12-09 10:09:20.705828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.672 [2024-12-09 10:09:20.705838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:67608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:34.672 [2024-12-09 10:09:20.705847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.672 [2024-12-09 10:09:20.705857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:67424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.672 [2024-12-09 10:09:20.705865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.672 [2024-12-09 10:09:20.705875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.672 [2024-12-09 10:09:20.705888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.672 [2024-12-09 10:09:20.705898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:67440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.672 [2024-12-09 10:09:20.705907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.672 [2024-12-09 10:09:20.705917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:67448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.672 [2024-12-09 10:09:20.705926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.672 [2024-12-09 10:09:20.705936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.672 [2024-12-09 10:09:20.705945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.672 [2024-12-09 10:09:20.705955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:67464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.672 [2024-12-09 10:09:20.705964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.672 [2024-12-09 10:09:20.705974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:67472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:34.672 [2024-12-09 10:09:20.705985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:34.672 [2024-12-09 10:09:20.705995] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:67616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:58:34.672 [2024-12-09 10:09:20.706004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE / "ABORTED - SQ DELETION" pair repeats for every queued 8-block write from lba:67624 through lba:67992 (cid values vary, qid:1 throughout); dozens of near-identical entries are elided here ...]
00:58:34.673 [2024-12-09 10:09:20.706930] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:58:34.673 [2024-12-09 10:09:20.706939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68000 len:8 PRP1 0x0 PRP2 0x0
00:58:34.673 [2024-12-09 10:09:20.706947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the "aborting queued i/o" / "Command completed manually" / WRITE / "ABORTED - SQ DELETION" sequence repeats for lba:68008 through lba:68120 and is elided here ...]
00:58:34.674 [2024-12-09 10:09:20.728248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:58:34.674 [2024-12-09 10:09:20.728273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST abort repeats for admin-queue cid:1, cid:2 and cid:3 and is elided here ...]
00:58:34.674 [2024-12-09 10:09:20.728394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abed90 is same with the state(6) to be set
00:58:34.674 [2024-12-09 10:09:20.730157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:58:34.674 [2024-12-09 10:09:20.730201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abed90 (9): Bad file descriptor
00:58:34.674 [2024-12-09 10:09:20.730371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:34.674 [2024-12-09 10:09:20.730399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abed90 with addr=10.0.0.3, port=4421
00:58:34.674 [2024-12-09 10:09:20.730415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abed90 is same with the state(6) to be set
00:58:34.674 [2024-12-09 10:09:20.730599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abed90 (9): Bad file descriptor
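The burst above is the expected signature of a dropped path in this multipath run: deleting the submission queue aborts every queued write with "ABORTED - SQ DELETION", bdev_nvme then disconnects the controller and reconnects over the surviving path (10.0.0.3:4421 here), and the errno 111 (connection refused) entries cover the window before the listener is reachable again. A minimal sketch of watching that reset cycle from the host side over SPDK's RPC interface; the socket path and the controller name Nvme0 are assumptions based on how this harness is wired up (the timeout test below uses /var/tmp/bdevperf.sock), not values confirmed for this exact run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock   # assumed host-side bdevperf RPC socket
    # Poll the bdev_nvme controller list once a second; during a reset the
    # entries cycle through disconnect/reconnect states matching these notices.
    while sleep 1; do
        "$rpc" -s "$sock" bdev_nvme_get_controllers -n Nvme0 || break
    done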
00:58:34.674 [2024-12-09 10:09:20.730743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:58:34.674 [2024-12-09 10:09:20.730762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:58:34.674 [2024-12-09 10:09:20.730779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:58:34.674 [2024-12-09 10:09:20.730793] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:58:34.674 [2024-12-09 10:09:20.730808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:58:34.674 8732.34 IOPS, 34.11 MiB/s
[2024-12-09T10:09:41.718Z] 8809.78 IOPS, 34.41 MiB/s
[2024-12-09T10:09:41.718Z] 8874.14 IOPS, 34.66 MiB/s
[2024-12-09T10:09:41.718Z] 8945.29 IOPS, 34.94 MiB/s
[2024-12-09T10:09:41.718Z] 9010.44 IOPS, 35.20 MiB/s
[2024-12-09T10:09:41.718Z] 9074.23 IOPS, 35.45 MiB/s
[2024-12-09T10:09:41.718Z] 9135.00 IOPS, 35.68 MiB/s
[2024-12-09T10:09:41.718Z] 9190.17 IOPS, 35.90 MiB/s
[2024-12-09T10:09:41.718Z] 9237.79 IOPS, 36.09 MiB/s
[2024-12-09T10:09:41.718Z] 9289.34 IOPS, 36.29 MiB/s
[2024-12-09 10:09:30.758771] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:58:34.674 9344.33 IOPS, 36.50 MiB/s
[2024-12-09T10:09:41.718Z] 9391.04 IOPS, 36.68 MiB/s
[2024-12-09T10:09:41.718Z] 9437.32 IOPS, 36.86 MiB/s
[2024-12-09T10:09:41.718Z] 9483.50 IOPS, 37.04 MiB/s
[2024-12-09T10:09:41.718Z] 9524.51 IOPS, 37.21 MiB/s
[2024-12-09T10:09:41.718Z] 9562.52 IOPS, 37.35 MiB/s
[2024-12-09T10:09:41.718Z] 9601.76 IOPS, 37.51 MiB/s
[2024-12-09T10:09:41.718Z] 9637.75 IOPS, 37.65 MiB/s
[2024-12-09T10:09:41.718Z] 9672.36 IOPS, 37.78 MiB/s
[2024-12-09T10:09:41.718Z] 9705.24 IOPS, 37.91 MiB/s
[2024-12-09T10:09:41.718Z] Received shutdown signal, test time was about 54.828800 seconds
00:58:34.674
00:58:34.674                                                                                                 Latency(us)
[2024-12-09T10:09:41.718Z] Device Information          : runtime(s)       IOPS      MiB/s    Fail/s    TO/s     Average        min          max
00:58:34.674 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:58:34.674 Verification LBA range: start 0x0 length 0x4000
00:58:34.674 Nvme0n1                     :      54.83    9728.44      38.00      0.00    0.00    13137.35    1416.61   7091853.75
00:58:34.674 [2024-12-09T10:09:41.718Z] ===================================================================================================================
00:58:34.674 [2024-12-09T10:09:41.718Z] Total                       :               9728.44      38.00      0.00    0.00    13137.35    1416.61   7091853.75
00:58:34.674 10:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync
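The nvmftestfini call traced just above fans out into the teardown below. A condensed sketch of the same unwind, taking the pid 96633 and the SPDK_NVMF iptables tag from this run; remove_spdk_ns is approximated here with a plain ip netns delete, which is an assumption rather than the helper's verbatim body:

    sync                                                   # flush before unloading initiator modules
    modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring      # mirrors the rmmod output below
    kill 96633                                             # stop the nvmf_tgt reactor, then wait for it to exit
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged firewall rules
    for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br" nomaster && ip link set "$br" down
    done
    ip link delete nvmf_br type bridge                     # dismantle the test bridge
    ip link delete nvmf_init_if && ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk                       # assumed equivalent of remove_spdk_ns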
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:58:34.674 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 96633 ']'
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 96633
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 96633 ']'
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 96633
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96633
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:58:34.674 killing process with pid 96633
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96633'
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 96633
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 96633
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:58:34.674 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:58:34.936 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:58:34.936 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:58:34.936 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:58:34.936 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns
00:58:34.936 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:58:34.936 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:58:34.936 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:58:34.936 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0
00:58:34.936
00:58:34.936 real 1m0.630s
00:58:34.936 user 2m53.492s
00:58:34.936 sys 0m11.631s
00:58:34.936 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable
00:58:34.936 10:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x
00:58:34.936 ************************************
00:58:34.936 END TEST nvmf_host_multipath
00:58:34.936 ************************************
00:58:34.936 10:09:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:58:34.936 10:09:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:58:34.936 10:09:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:58:34.936 10:09:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:58:34.936 ************************************
00:58:34.936 START TEST nvmf_timeout
00:58:34.936 ************************************
00:58:34.936 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:58:34.936 * Looking for test storage...
00:58:34.936 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:58:34.936 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:58:34.936 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:58:34.936 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:58:34.936 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:58:34.936 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:58:34.936 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:58:34.936 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:58:34.936 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:58:34.936 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:58:34.936 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:58:34.936 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:58:34.936 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:58:34.936 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:58:34.936 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:58:34.936 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:58:34.936 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:58:34.936 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:58:34.936 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:58:34.936 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:58:34.936 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:58:34.936 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:58:34.936 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:58:34.936 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:58:34.936 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:58:35.197 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:58:35.197 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:58:35.197 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:58:35.197 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:58:35.197 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:58:35.197 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:58:35.197 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:58:35.197 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:58:35.197 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:58:35.197 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:58:35.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:35.197 --rc genhtml_branch_coverage=1 00:58:35.197 --rc genhtml_function_coverage=1 00:58:35.197 --rc genhtml_legend=1 00:58:35.197 --rc geninfo_all_blocks=1 00:58:35.197 --rc geninfo_unexecuted_blocks=1 00:58:35.197 00:58:35.197 ' 00:58:35.197 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:58:35.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:35.197 --rc genhtml_branch_coverage=1 00:58:35.197 --rc genhtml_function_coverage=1 00:58:35.197 --rc genhtml_legend=1 00:58:35.197 --rc geninfo_all_blocks=1 00:58:35.197 --rc geninfo_unexecuted_blocks=1 00:58:35.197 00:58:35.197 ' 00:58:35.197 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:58:35.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:35.197 --rc genhtml_branch_coverage=1 00:58:35.197 --rc genhtml_function_coverage=1 00:58:35.197 --rc genhtml_legend=1 00:58:35.197 --rc geninfo_all_blocks=1 00:58:35.197 --rc geninfo_unexecuted_blocks=1 00:58:35.197 00:58:35.197 ' 00:58:35.197 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:58:35.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:35.197 --rc genhtml_branch_coverage=1 00:58:35.197 --rc genhtml_function_coverage=1 00:58:35.197 --rc genhtml_legend=1 00:58:35.197 --rc geninfo_all_blocks=1 00:58:35.197 --rc geninfo_unexecuted_blocks=1 00:58:35.197 00:58:35.197 ' 00:58:35.197 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:58:35.197 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:58:35.197 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:58:35.197 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:58:35.197 
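The lt 1.15 2 walk traced above (from scripts/common.sh) splits both version strings on '.', '-' or ':' and compares them field by field to decide which lcov option spelling to use. A condensed sketch of the same logic, not the verbatim scripts/common.sh implementation:

    cmp_versions() {                      # usage: cmp_versions 1.15 '<' 2
        local op=$2 IFS='.-:'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            # First differing field decides; missing fields count as 0.
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' || $op == '>=' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all fields equal
    }
    cmp_versions 1.15 '<' 2 && echo "lcov 1.15 predates 2: keep the old --rc option spelling"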
10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:58:35.197 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:58:35.197 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:58:35.197 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:58:35.197 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:58:35.197 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:58:35.197 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:58:35.197 10:09:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:58:35.197 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:58:35.197 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:58:35.197 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:58:35.197 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:58:35.197 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:58:35.197 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:58:35.197 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:58:35.197 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:58:35.197 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:58:35.197 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:58:35.197 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:58:35.197 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:35.197 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:35.197 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:35.197 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:58:35.197 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:35.197 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:58:35.198 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:58:35.198 10:09:42 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:58:35.198 Cannot find device "nvmf_init_br" 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:58:35.198 Cannot find device "nvmf_init_br2" 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:58:35.198 Cannot find device "nvmf_tgt_br" 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:58:35.198 Cannot find device "nvmf_tgt_br2" 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:58:35.198 Cannot find device "nvmf_init_br" 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:58:35.198 Cannot find device "nvmf_init_br2" 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:58:35.198 Cannot find device "nvmf_tgt_br" 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:58:35.198 Cannot find device "nvmf_tgt_br2" 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:58:35.198 Cannot find device "nvmf_br" 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:58:35.198 Cannot find device "nvmf_init_if" 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:58:35.198 Cannot find device "nvmf_init_if2" 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:58:35.198 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:58:35.198 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:58:35.198 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:58:35.458 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:58:35.458 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:58:35.458 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:58:35.458 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:58:35.458 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:58:35.458 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:58:35.458 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:58:35.458 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:58:35.458 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:58:35.458 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:58:35.458 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:58:35.458 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:58:35.458 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:58:35.458 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:58:35.458 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:58:35.458 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:58:35.458 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:58:35.458 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:58:35.458 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:58:35.458 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:58:35.458 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:58:35.458 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:58:35.458 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:58:35.458 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:58:35.458 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:58:35.458 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:58:35.458 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:58:35.458 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
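Condensed for reference, the topology the trace above just assembled, which the pings below then verify. Only the first initiator/target pair is shown; the run builds the second pair (nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4) the same way:

    ip netns add nvmf_tgt_ns_spdk                                  # target-side namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge stitching both sides together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic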
00:58:35.458 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:58:35.458 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:58:35.458 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:58:35.458 00:58:35.458 --- 10.0.0.3 ping statistics --- 00:58:35.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:35.458 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:58:35.458 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:58:35.458 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:58:35.458 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.108 ms 00:58:35.458 00:58:35.458 --- 10.0.0.4 ping statistics --- 00:58:35.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:35.458 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:58:35.459 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:58:35.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:58:35.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:58:35.459 00:58:35.459 --- 10.0.0.1 ping statistics --- 00:58:35.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:35.459 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:58:35.459 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:58:35.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:58:35.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:58:35.459 00:58:35.459 --- 10.0.0.2 ping statistics --- 00:58:35.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:35.459 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:58:35.459 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:58:35.459 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:58:35.459 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:58:35.459 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:58:35.459 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:58:35.459 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:58:35.459 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:58:35.459 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:58:35.459 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:58:35.459 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:58:35.459 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:58:35.459 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:58:35.459 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:58:35.459 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=98055 00:58:35.459 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:58:35.459 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 98055 00:58:35.459 10:09:42 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 98055 ']' 00:58:35.459 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:58:35.459 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:58:35.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:58:35.459 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:58:35.459 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:58:35.459 10:09:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:58:35.718 [2024-12-09 10:09:42.505587] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:58:35.718 [2024-12-09 10:09:42.505655] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:58:35.718 [2024-12-09 10:09:42.655926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:58:35.718 [2024-12-09 10:09:42.706399] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:58:35.718 [2024-12-09 10:09:42.706446] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:58:35.718 [2024-12-09 10:09:42.706453] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:58:35.718 [2024-12-09 10:09:42.706458] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:58:35.718 [2024-12-09 10:09:42.706462] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
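The trace above is nvmfappstart -m 0x3 in action: nvmf_tgt is launched inside the target namespace and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A minimal sketch of that launch-and-poll pattern, assuming an rpc_get_methods call as the readiness probe (the real waitforlisten helper lives in autotest_common.sh and may probe differently):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Launch the target with the trace's arguments: instance 0, tracepoint mask 0xFFFF, cores 0-1.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
# Poll the RPC socket until it answers; bail out if the target process dies first.
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || exit 1
    sleep 0.5
done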
00:58:35.718 [2024-12-09 10:09:42.707270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:58:35.718 [2024-12-09 10:09:42.707272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:58:36.657 10:09:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:58:36.657 10:09:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:58:36.657 10:09:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:58:36.657 10:09:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:58:36.657 10:09:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:58:36.657 10:09:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:58:36.657 10:09:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:58:36.657 10:09:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:58:36.657 [2024-12-09 10:09:43.652808] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:58:36.657 10:09:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:58:36.917 Malloc0 00:58:36.917 10:09:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:58:37.176 10:09:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:58:37.435 10:09:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:58:37.695 [2024-12-09 10:09:44.507817] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:58:37.695 10:09:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=98142 00:58:37.695 10:09:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:58:37.695 10:09:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 98142 /var/tmp/bdevperf.sock 00:58:37.696 10:09:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 98142 ']' 00:58:37.696 10:09:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:58:37.696 10:09:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:58:37.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:58:37.696 10:09:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
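Taken together, the RPC calls just traced provision the target end-to-end; condensed below, with paths, NQN, and arguments exactly as recorded above:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" nvmf_create_transport -t tcp -o -u 8192    # TCP transport, options from NVMF_TRANSPORT_OPTS
"$rpc" bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM-backed bdev, 512-byte blocks
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420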
00:58:37.696 10:09:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:58:37.696 10:09:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:58:37.696 [2024-12-09 10:09:44.586301] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:58:37.696 [2024-12-09 10:09:44.586369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98142 ] 00:58:37.696 [2024-12-09 10:09:44.719850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:58:37.955 [2024-12-09 10:09:44.768249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:58:38.528 10:09:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:58:38.528 10:09:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:58:38.528 10:09:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:58:38.786 10:09:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:58:39.045 NVMe0n1 00:58:39.045 10:09:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=98184 00:58:39.045 10:09:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:58:39.045 10:09:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:58:39.045 Running I/O for 10 seconds... 
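Before the listener-removal step that follows, the host side just traced is worth condensing: bdevperf runs on its own core with a private RPC socket, attaches NVMe0 to the 10.0.0.3:4420 listener with a 5-second controller-loss timeout and a 2-second reconnect delay, then perform_tests starts the 10-second verify workload (commands as recorded in the trace):

spdk=/home/vagrant/spdk_repo/spdk
"$spdk"/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -f &           # queue depth 128, 4 KiB verify I/O, 10 s
"$spdk"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
"$spdk"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
"$spdk"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests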
00:58:39.984 10:09:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:58:40.247 11791.00 IOPS, 46.06 MiB/s [2024-12-09T10:09:47.291Z]
00:58:40.247 [2024-12-09 10:09:47.138928] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9120 is same with the state(6) to be set
[... the tcp.c:1790 message above repeats verbatim dozens of times (timestamps 10:09:47.138928 through 10:09:47.139383) as the listener teardown walks the qpair state machine; duplicate lines elided ...]
00:58:40.248 [2024-12-09 10:09:47.140978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:104488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:58:40.248 [2024-12-09 10:09:47.141017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the print_command/print_completion pair above repeats for every outstanding READ and WRITE on qid:1 (lba 104488 through 105320), each completed ABORTED - SQ DELETION; near-duplicate lines elided ...]
00:58:40.250 [2024-12-09 10:09:47.142684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:58:40.250 [2024-12-09 10:09:47.142689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:58:40.250 [2024-12-09 10:09:47.142694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105336 len:8 PRP1 0x0 PRP2 0x0
00:58:40.250 [2024-12-09 10:09:47.142700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the aborting-queued-i/o / manual-completion sequence above repeats for the remaining queued WRITEs (lba 105328 through 105400); near-duplicate lines elided ...]
aborting queued i/o 00:58:40.251 [2024-12-09 10:09:47.142914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:58:40.251 [2024-12-09 10:09:47.142919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105408 len:8 PRP1 0x0 PRP2 0x0 00:58:40.251 [2024-12-09 10:09:47.142925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:40.251 [2024-12-09 10:09:47.142932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:58:40.251 [2024-12-09 10:09:47.142936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:58:40.251 [2024-12-09 10:09:47.142941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105416 len:8 PRP1 0x0 PRP2 0x0 00:58:40.251 [2024-12-09 10:09:47.142947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:40.251 [2024-12-09 10:09:47.142961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:58:40.251 [2024-12-09 10:09:47.142966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:58:40.251 [2024-12-09 10:09:47.142971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105424 len:8 PRP1 0x0 PRP2 0x0 00:58:40.251 [2024-12-09 10:09:47.142977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:40.251 [2024-12-09 10:09:47.142983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:58:40.251 [2024-12-09 10:09:47.142988] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:58:40.251 [2024-12-09 10:09:47.142996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105432 len:8 PRP1 0x0 PRP2 0x0 00:58:40.251 [2024-12-09 10:09:47.143003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:40.251 [2024-12-09 10:09:47.143009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:58:40.251 [2024-12-09 10:09:47.143014] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:58:40.251 [2024-12-09 10:09:47.143019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105440 len:8 PRP1 0x0 PRP2 0x0 00:58:40.251 [2024-12-09 10:09:47.143024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:40.251 [2024-12-09 10:09:47.143031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:58:40.251 [2024-12-09 10:09:47.143040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:58:40.251 [2024-12-09 10:09:47.143045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105448 len:8 PRP1 0x0 PRP2 0x0 00:58:40.251 [2024-12-09 10:09:47.143051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:40.251 [2024-12-09 10:09:47.143058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:58:40.251 [2024-12-09 
10:09:47.143062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:58:40.251 [2024-12-09 10:09:47.143067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105456 len:8 PRP1 0x0 PRP2 0x0 00:58:40.251 [2024-12-09 10:09:47.143073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:40.251 [2024-12-09 10:09:47.143083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:58:40.251 [2024-12-09 10:09:47.143087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:58:40.251 [2024-12-09 10:09:47.143092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105464 len:8 PRP1 0x0 PRP2 0x0 00:58:40.251 [2024-12-09 10:09:47.143098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:40.251 [2024-12-09 10:09:47.143106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:58:40.251 [2024-12-09 10:09:47.143111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:58:40.251 [2024-12-09 10:09:47.143116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105472 len:8 PRP1 0x0 PRP2 0x0 00:58:40.251 [2024-12-09 10:09:47.143126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:40.251 [2024-12-09 10:09:47.143132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:58:40.251 [2024-12-09 10:09:47.143136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:58:40.251 [2024-12-09 10:09:47.143141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105480 len:8 PRP1 0x0 PRP2 0x0 00:58:40.251 [2024-12-09 10:09:47.143147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:40.251 [2024-12-09 10:09:47.143153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:58:40.251 [2024-12-09 10:09:47.143159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:58:40.251 [2024-12-09 10:09:47.143167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105488 len:8 PRP1 0x0 PRP2 0x0 00:58:40.251 [2024-12-09 10:09:47.143173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:40.251 10:09:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:58:40.251 [2024-12-09 10:09:47.161615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:58:40.251 [2024-12-09 10:09:47.161659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:58:40.251 [2024-12-09 10:09:47.161675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105496 len:8 PRP1 0x0 PRP2 0x0 00:58:40.251 [2024-12-09 10:09:47.161688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:40.251 [2024-12-09 10:09:47.161699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:58:40.251 [2024-12-09 10:09:47.161707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:58:40.251 [2024-12-09 10:09:47.161716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105504 len:8 PRP1 0x0 PRP2 0x0 00:58:40.251 [2024-12-09 10:09:47.161726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:40.251 [2024-12-09 10:09:47.161738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:58:40.251 [2024-12-09 10:09:47.161746] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:58:40.251 [2024-12-09 10:09:47.161754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104568 len:8 PRP1 0x0 PRP2 0x0 00:58:40.251 [2024-12-09 10:09:47.161764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:40.251 [2024-12-09 10:09:47.161774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:58:40.251 [2024-12-09 10:09:47.161782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:58:40.251 [2024-12-09 10:09:47.161790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104576 len:8 PRP1 0x0 PRP2 0x0 00:58:40.251 [2024-12-09 10:09:47.161800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:40.251 [2024-12-09 10:09:47.161810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:58:40.251 [2024-12-09 10:09:47.161817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:58:40.251 [2024-12-09 10:09:47.161825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104584 len:8 PRP1 0x0 PRP2 0x0 00:58:40.251 [2024-12-09 10:09:47.161835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:40.251 [2024-12-09 10:09:47.161846] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:58:40.251 [2024-12-09 10:09:47.161854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:58:40.251 [2024-12-09 10:09:47.161862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104592 len:8 PRP1 0x0 PRP2 0x0 00:58:40.251 [2024-12-09 10:09:47.161872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:40.251 [2024-12-09 10:09:47.161882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:58:40.251 [2024-12-09 10:09:47.161889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:58:40.251 [2024-12-09 10:09:47.161898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104600 len:8 PRP1 0x0 PRP2 0x0 00:58:40.251 [2024-12-09 10:09:47.161907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:40.251 [2024-12-09 10:09:47.161917] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:58:40.252 [2024-12-09 
10:09:47.161924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:58:40.252 [2024-12-09 10:09:47.161933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104608 len:8 PRP1 0x0 PRP2 0x0 00:58:40.252 [2024-12-09 10:09:47.161942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:40.252 [2024-12-09 10:09:47.161955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:58:40.252 [2024-12-09 10:09:47.161962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:58:40.252 [2024-12-09 10:09:47.161971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104616 len:8 PRP1 0x0 PRP2 0x0 00:58:40.252 [2024-12-09 10:09:47.161981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:40.252 [2024-12-09 10:09:47.162172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:58:40.252 [2024-12-09 10:09:47.162189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:40.252 [2024-12-09 10:09:47.162202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:58:40.252 [2024-12-09 10:09:47.162212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:40.252 [2024-12-09 10:09:47.162223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:58:40.252 [2024-12-09 10:09:47.162233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:40.252 [2024-12-09 10:09:47.162245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:58:40.252 [2024-12-09 10:09:47.162254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:40.252 [2024-12-09 10:09:47.162264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2160f30 is same with the state(6) to be set 00:58:40.252 [2024-12-09 10:09:47.162640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:58:40.252 [2024-12-09 10:09:47.162677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2160f30 (9): Bad file descriptor 00:58:40.252 [2024-12-09 10:09:47.162796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:40.252 [2024-12-09 10:09:47.162826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2160f30 with addr=10.0.0.3, port=4420 00:58:40.252 [2024-12-09 10:09:47.162838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2160f30 is same with the state(6) to be set 00:58:40.252 [2024-12-09 10:09:47.162857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2160f30 (9): Bad file descriptor 00:58:40.252 [2024-12-09 10:09:47.162875] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:58:40.252 [2024-12-09 10:09:47.162884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:58:40.252 [2024-12-09 10:09:47.162895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:58:40.252 [2024-12-09 10:09:47.162906] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:58:40.252 [2024-12-09 10:09:47.162917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:58:42.131 6530.50 IOPS, 25.51 MiB/s [2024-12-09T10:09:49.175Z] 4353.67 IOPS, 17.01 MiB/s [2024-12-09T10:09:49.175Z] [2024-12-09 10:09:49.159238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:42.131 [2024-12-09 10:09:49.159282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2160f30 with addr=10.0.0.3, port=4420 00:58:42.131 [2024-12-09 10:09:49.159298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2160f30 is same with the state(6) to be set 00:58:42.131 [2024-12-09 10:09:49.159317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2160f30 (9): Bad file descriptor 00:58:42.131 [2024-12-09 10:09:49.159330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:58:42.131 [2024-12-09 10:09:49.159337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:58:42.131 [2024-12-09 10:09:49.159344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:58:42.131 [2024-12-09 10:09:49.159352] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
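errno = 111 in the connect() failures above is ECONNREFUSED: the test has deliberately torn down the target listener at 10.0.0.3:4420, so every reconnect attempt the bdev_nvme layer makes is refused until the listener comes back. A minimal sketch of probing that listener from the shell, assuming bash with /dev/tcp support; the address and port are the ones in this log, while the retry count and delay are arbitrary:

#!/usr/bin/env bash
# Probe the NVMe/TCP listener that the reconnect loop above keeps failing to reach.
# 10.0.0.3:4420 come from this log; 5 attempts at 1s spacing are an arbitrary choice.
host=10.0.0.3 port=4420
for attempt in 1 2 3 4 5; do
  if timeout 1 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "attempt $attempt: listener is up"
    exit 0
  fi
  echo "attempt $attempt: connection refused, retrying in 1s"
  sleep 1
done
echo "listener still down after 5 attempts" >&2
exit 1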
00:58:42.131 [2024-12-09 10:09:49.159360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:58:42.131 10:09:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:58:42.131 10:09:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:58:42.131 10:09:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:58:42.390 10:09:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:58:42.390 10:09:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:58:42.390 10:09:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:58:42.390 10:09:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:58:42.650 10:09:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:58:42.650 10:09:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:58:44.158 3265.25 IOPS, 12.75 MiB/s [2024-12-09T10:09:51.202Z] 2612.20 IOPS, 10.20 MiB/s [2024-12-09T10:09:51.202Z] [2024-12-09 10:09:51.155754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:44.158 [2024-12-09 10:09:51.155807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2160f30 with addr=10.0.0.3, port=4420 00:58:44.158 [2024-12-09 10:09:51.155818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2160f30 is same with the state(6) to be set 00:58:44.159 [2024-12-09 10:09:51.155837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2160f30 (9): Bad file descriptor 00:58:44.159 [2024-12-09 10:09:51.155850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:58:44.159 [2024-12-09 10:09:51.155857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:58:44.159 [2024-12-09 10:09:51.155865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:58:44.159 [2024-12-09 10:09:51.155873] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:58:44.159 [2024-12-09 10:09:51.155882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:58:46.043 2176.83 IOPS, 8.50 MiB/s [2024-12-09T10:09:53.347Z] 1865.86 IOPS, 7.29 MiB/s [2024-12-09T10:09:53.347Z] [2024-12-09 10:09:53.152219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:58:46.303 [2024-12-09 10:09:53.152264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:58:46.303 [2024-12-09 10:09:53.152272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:58:46.303 [2024-12-09 10:09:53.152279] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:58:46.303 [2024-12-09 10:09:53.152288] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
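While the controller is unreachable, host/timeout.sh keeps asserting that the bdev layer still exposes the controller and its namespace by name (NVMe0 / NVMe0n1). A condensed sketch of those two checks, using the same rpc.py path, RPC socket, and jq filters that appear in the trace above:

#!/usr/bin/env bash
# Re-run the get_controller/get_bdev name checks from host/timeout.sh by hand.
# Script path and RPC socket are the ones visible in this log.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
ctrlr=$("$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name')
bdev=$("$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name')
[[ $ctrlr == NVMe0 ]] || { echo "unexpected controller: $ctrlr" >&2; exit 1; }
[[ $bdev == NVMe0n1 ]] || { echo "unexpected bdev: $bdev" >&2; exit 1; }
echo "controller $ctrlr and bdev $bdev still present"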
00:58:47.264 1632.62 IOPS, 6.38 MiB/s 00:58:47.264 Latency(us) 00:58:47.264 [2024-12-09T10:09:54.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:58:47.264 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:58:47.264 Verification LBA range: start 0x0 length 0x4000 00:58:47.264 NVMe0n1 : 8.13 1606.60 6.28 15.74 0.00 78793.31 1538.24 7033243.39 00:58:47.264 [2024-12-09T10:09:54.308Z] =================================================================================================================== 00:58:47.264 [2024-12-09T10:09:54.308Z] Total : 1606.60 6.28 15.74 0.00 78793.31 1538.24 7033243.39 00:58:47.264 { 00:58:47.264 "results": [ 00:58:47.264 { 00:58:47.264 "job": "NVMe0n1", 00:58:47.264 "core_mask": "0x4", 00:58:47.264 "workload": "verify", 00:58:47.264 "status": "finished", 00:58:47.264 "verify_range": { 00:58:47.264 "start": 0, 00:58:47.264 "length": 16384 00:58:47.264 }, 00:58:47.264 "queue_depth": 128, 00:58:47.264 "io_size": 4096, 00:58:47.264 "runtime": 8.129595, 00:58:47.264 "iops": 1606.5990987250902, 00:58:47.264 "mibps": 6.275777729394884, 00:58:47.264 "io_failed": 128, 00:58:47.264 "io_timeout": 0, 00:58:47.264 "avg_latency_us": 78793.30578949442, 00:58:47.264 "min_latency_us": 1538.235807860262, 00:58:47.264 "max_latency_us": 7033243.388646288 00:58:47.264 } 00:58:47.264 ], 00:58:47.264 "core_count": 1 00:58:47.264 } 00:58:47.833 10:09:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:58:47.833 10:09:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:58:47.833 10:09:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:58:47.833 10:09:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:58:47.833 10:09:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:58:47.833 10:09:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:58:47.833 10:09:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:58:48.093 10:09:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:58:48.093 10:09:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 98184 00:58:48.093 10:09:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 98142 00:58:48.094 10:09:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 98142 ']' 00:58:48.094 10:09:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 98142 00:58:48.094 10:09:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:58:48.094 10:09:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:58:48.094 10:09:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98142 00:58:48.094 10:09:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:58:48.094 10:09:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:58:48.094 killing process with pid 98142 00:58:48.094 10:09:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98142' 00:58:48.094 10:09:55 nvmf_tcp.nvmf_host.nvmf_timeout -- 
common/autotest_common.sh@973 -- # kill 98142 00:58:48.094 Received shutdown signal, test time was about 9.060983 seconds 00:58:48.094 00:58:48.094 Latency(us) 00:58:48.094 [2024-12-09T10:09:55.138Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:58:48.094 [2024-12-09T10:09:55.138Z] =================================================================================================================== 00:58:48.094 [2024-12-09T10:09:55.138Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:58:48.094 10:09:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 98142 00:58:48.357 10:09:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:58:48.619 [2024-12-09 10:09:55.412357] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:58:48.619 10:09:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=98347 00:58:48.619 10:09:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:58:48.619 10:09:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 98347 /var/tmp/bdevperf.sock 00:58:48.619 10:09:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 98347 ']' 00:58:48.619 10:09:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:58:48.619 10:09:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:58:48.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:58:48.619 10:09:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:58:48.619 10:09:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:58:48.619 10:09:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:58:48.619 [2024-12-09 10:09:55.501767] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
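The human-readable Latency(us) table printed at 00:58:47 above is rendered from the JSON block right after it; the raw fields (iops, mibps, io_failed, runtime, avg/min/max latency in microseconds) are easier to post-process directly. A small sketch, assuming the JSON has been captured to a hypothetical result.json and that jq is available:

#!/usr/bin/env bash
# Pull the headline numbers out of a bdevperf JSON result like the one above.
# result.json is a hypothetical capture of that JSON block.
jq -r '.results[0] | "\(.job): \(.iops) IOPS (\(.mibps) MiB/s), \(.io_failed) failed IOs in \(.runtime)s, avg latency \(.avg_latency_us) us"' result.json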
00:58:48.619 [2024-12-09 10:09:55.501865] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98347 ] 00:58:48.619 [2024-12-09 10:09:55.633969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:58:48.878 [2024-12-09 10:09:55.686291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:58:49.447 10:09:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:58:49.447 10:09:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:58:49.447 10:09:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:58:49.707 10:09:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:58:49.966 NVMe0n1 00:58:49.966 10:09:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:58:49.966 10:09:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=98390 00:58:49.966 10:09:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:58:50.225 Running I/O for 10 seconds... 00:58:51.168 10:09:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:58:51.168 11683.00 IOPS, 45.64 MiB/s [2024-12-09T10:09:58.212Z] [2024-12-09 10:09:58.164094] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd216b0 is same with the state(6) to be set 00:58:51.168 [2024-12-09 10:09:58.164143] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd216b0 is same with the state(6) to be set 00:58:51.168 [2024-12-09 10:09:58.164149] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd216b0 is same with the state(6) to be set 00:58:51.168 [2024-12-09 10:09:58.164154] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd216b0 is same with the state(6) to be set 00:58:51.168 [2024-12-09 10:09:58.164160] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd216b0 is same with the state(6) to be set 00:58:51.168 [2024-12-09 10:09:58.164164] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd216b0 is same with the state(6) to be set 00:58:51.168 [2024-12-09 10:09:58.164169] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd216b0 is same with the state(6) to be set 00:58:51.168 [2024-12-09 10:09:58.164173] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd216b0 is same with the state(6) to be set 00:58:51.168 [2024-12-09 10:09:58.164178] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd216b0 is same with the state(6) to be set 00:58:51.168 [2024-12-09 10:09:58.164183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd216b0 is same with the 
state(6) to be set
[... tcp.c:1790:nvmf_tcp_qpair_set_recv_state prints the identical "The recv state of tqpair=0xd216b0 is same with the state(6) to be set" error over and over while the listener is torn down (timestamps 10:09:58.164187 through 10:09:58.164613); duplicate entries elided ...]
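The recovery behaviour exercised here was armed by the bdev_nvme_attach_controller call at 10:09:56 above. As I read those knobs: --reconnect-delay-sec 1 spaces the reconnect attempts, --fast-io-fail-timeout-sec 2 lets queued I/O start failing after two seconds of outage, and --ctrlr-loss-timeout-sec 5 abandons the controller entirely after five; treat that gloss as an interpretation rather than authoritative documentation. The same attach, restated as a standalone command with the values from this log:

#!/usr/bin/env bash
# Attach the NVMe-oF controller with the recovery knobs used by host/timeout.sh.
# All values (names, address, port, timeouts) are taken verbatim from this log.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
  -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
  --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1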
00:58:51.169 [2024-12-09 10:09:58.165200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:104872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.169 [2024-12-09 10:09:58.165236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the remaining in-flight READs (lba:104880 through lba:105184, len:8 each, varying cids) are printed and aborted with the same SQ DELETION status; duplicate entries elided ...]
00:58:51.170 [2024-12-09 10:09:58.165817] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:21 nsid:1 lba:105192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.170 [2024-12-09 10:09:58.165822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.170 [2024-12-09 10:09:58.165829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:105200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.170 [2024-12-09 10:09:58.165835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.170 [2024-12-09 10:09:58.165843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:105208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.170 [2024-12-09 10:09:58.165848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.170 [2024-12-09 10:09:58.165855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:105216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.170 [2024-12-09 10:09:58.165864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.165871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:105224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.165876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.165888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:105232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.165894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.165901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:105240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.165906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.165913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.165918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.165925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:105256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.165930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.165937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:105264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.165942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.165955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 
nsid:1 lba:105272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.165960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.165967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:105280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.165973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.165980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:105288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.165985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.165992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:105296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.165998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.166010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:105304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.166016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.166023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:105312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.166028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.166035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:105320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.166041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.166053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:105328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.166059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.166070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:105336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.166076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.166083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:105344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.166089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.166096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:105352 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.166102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.166109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:105360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.166114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.166121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:105368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.166140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.166148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:105376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.166153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.166160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:105384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.166166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.166173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.166178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.166190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:105400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.166196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.166203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:105408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.166208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.166215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:105416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.166227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.166235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:105424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.166240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.166247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:58:51.171 [2024-12-09 10:09:58.166252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.166265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:105440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.166271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.166278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:105448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.166288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.166295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:105456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.166301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.166308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:105464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.166313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.166320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:105472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.166326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.166332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:105480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.166343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.166350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:105488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.166356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.166368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:105496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.166373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.166380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:105504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.166386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.166392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:105512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 
10:09:58.166403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.166409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:105520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.166415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.166422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.166427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.166460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:105536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.171 [2024-12-09 10:09:58.166466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.171 [2024-12-09 10:09:58.166481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:105544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.172 [2024-12-09 10:09:58.166488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.166500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:105552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.172 [2024-12-09 10:09:58.166506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.166513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:105560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.172 [2024-12-09 10:09:58.166518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.166525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:105568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.172 [2024-12-09 10:09:58.166530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.166537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:105576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.172 [2024-12-09 10:09:58.166542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.166562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:105584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.172 [2024-12-09 10:09:58.166569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.166575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:105592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.172 [2024-12-09 10:09:58.166587] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.166594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:105600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.172 [2024-12-09 10:09:58.166600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.166607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:105608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.172 [2024-12-09 10:09:58.166612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.166620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:105616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:51.172 [2024-12-09 10:09:58.166625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.166638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:105624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:51.172 [2024-12-09 10:09:58.166644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.166651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:105632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:51.172 [2024-12-09 10:09:58.166661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.166669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:105640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:51.172 [2024-12-09 10:09:58.166675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.166682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:105648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:51.172 [2024-12-09 10:09:58.166687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.166694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:105656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:51.172 [2024-12-09 10:09:58.166705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.166712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:105664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:51.172 [2024-12-09 10:09:58.166718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.166725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:51.172 [2024-12-09 10:09:58.166730] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.166737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:105680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:51.172 [2024-12-09 10:09:58.166742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.166758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:105688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:51.172 [2024-12-09 10:09:58.166766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.166774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:105696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:51.172 [2024-12-09 10:09:58.166779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.166786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:105704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:51.172 [2024-12-09 10:09:58.166792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.166799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:105712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:51.172 [2024-12-09 10:09:58.166817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.166825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:105720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:51.172 [2024-12-09 10:09:58.166831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.166837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:51.172 [2024-12-09 10:09:58.166852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.166859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:105736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:51.172 [2024-12-09 10:09:58.166864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.166871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:105744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:51.172 [2024-12-09 10:09:58.166884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.166891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:105752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:51.172 [2024-12-09 10:09:58.166897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.166903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:105760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:51.172 [2024-12-09 10:09:58.166909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.166916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:105768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:51.172 [2024-12-09 10:09:58.166927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.166934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:105776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:51.172 [2024-12-09 10:09:58.166940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.166952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:105784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:51.172 [2024-12-09 10:09:58.166957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.166964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:105792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:51.172 [2024-12-09 10:09:58.166969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.166981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:105800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:51.172 [2024-12-09 10:09:58.166987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.166994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:105808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:51.172 [2024-12-09 10:09:58.166999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.167005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:105816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:51.172 [2024-12-09 10:09:58.167011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.167022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:105824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:51.172 [2024-12-09 10:09:58.167029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.167035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:105832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:51.172 [2024-12-09 10:09:58.167041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:58:51.172 [2024-12-09 10:09:58.167052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:105840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:51.172 [2024-12-09 10:09:58.167058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.167065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:105848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:51.172 [2024-12-09 10:09:58.167070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.167082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:105856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:51.172 [2024-12-09 10:09:58.167087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.172 [2024-12-09 10:09:58.167094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:105864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:51.173 [2024-12-09 10:09:58.167100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.173 [2024-12-09 10:09:58.167108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:105872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:51.173 [2024-12-09 10:09:58.167118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.173 [2024-12-09 10:09:58.167126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:105880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:51.173 [2024-12-09 10:09:58.167132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.173 [2024-12-09 10:09:58.167153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:58:51.173 [2024-12-09 10:09:58.167159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:58:51.173 [2024-12-09 10:09:58.167163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105888 len:8 PRP1 0x0 PRP2 0x0 00:58:51.173 [2024-12-09 10:09:58.167174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:51.173 [2024-12-09 10:09:58.167461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:58:51.173 [2024-12-09 10:09:58.167610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d4f30 (9): Bad file descriptor 00:58:51.173 [2024-12-09 10:09:58.167681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:51.173 [2024-12-09 10:09:58.167694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d4f30 with addr=10.0.0.3, port=4420 00:58:51.173 [2024-12-09 10:09:58.167701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d4f30 is same with the state(6) to be set 00:58:51.173 [2024-12-09 10:09:58.167712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x8d4f30 (9): Bad file descriptor 00:58:51.173 [2024-12-09 10:09:58.167723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:58:51.173 [2024-12-09 10:09:58.167730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:58:51.173 [2024-12-09 10:09:58.167738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:58:51.173 [2024-12-09 10:09:58.167746] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:58:51.173 [2024-12-09 10:09:58.167753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:58:51.173 10:09:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:58:52.372 6554.50 IOPS, 25.60 MiB/s [2024-12-09T10:09:59.416Z] [2024-12-09 10:09:59.165945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:52.372 [2024-12-09 10:09:59.166007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d4f30 with addr=10.0.0.3, port=4420 00:58:52.372 [2024-12-09 10:09:59.166034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d4f30 is same with the state(6) to be set 00:58:52.372 [2024-12-09 10:09:59.166054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d4f30 (9): Bad file descriptor 00:58:52.372 [2024-12-09 10:09:59.166068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:58:52.372 [2024-12-09 10:09:59.166074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:58:52.372 [2024-12-09 10:09:59.166082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:58:52.372 [2024-12-09 10:09:59.166090] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:58:52.372 [2024-12-09 10:09:59.166098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:58:52.372 10:09:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:58:52.372 [2024-12-09 10:09:59.394662] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:58:52.632 10:09:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 98390 00:58:53.202 4369.67 IOPS, 17.07 MiB/s [2024-12-09T10:10:00.246Z] [2024-12-09 10:10:00.179171] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
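The sequence above is the nvmf_host timeout test exercising reconnect handling: host/timeout.sh had removed the subsystem's TCP listener, so every reconnect attempt from the initiator fails with connect() errno = 111 and the controller reset loops through the error path; once rpc.py nvmf_subsystem_add_listener restores the listener, the next reset completes ("Resetting controller successful"). A minimal Python sketch of that toggle cycle, assuming a running SPDK nvmf target with its RPC socket at the default location, and using the rpc.py path, NQN, address, and port shown in the log (the sleep length is illustrative, not taken from the script):

#!/usr/bin/env python3
# Sketch of the listener remove/re-add cycle visible in the log above.
# Assumptions: rpc.py path and listener parameters copied from the log;
# the target's RPC socket is at the default /var/tmp/spdk.sock.
import subprocess
import time

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
NQN = "nqn.2016-06.io.spdk:cnode1"
LISTENER = ["-t", "tcp", "-a", "10.0.0.3", "-s", "4420"]

def rpc(*args: str) -> None:
    # Each call shells out to rpc.py, which speaks JSON-RPC to the target.
    subprocess.run([RPC, *args], check=True)

# Dropping the listener aborts the queued I/O (the SQ DELETION notices above)
# and makes host reconnects fail with ECONNREFUSED (errno = 111).
rpc("nvmf_subsystem_remove_listener", NQN, *LISTENER)
time.sleep(1)  # illustrative wait while the host retries
# Restoring the listener lets the next controller reset succeed.
rpc("nvmf_subsystem_add_listener", NQN, *LISTENER)

With the listener back, bdevperf's outstanding I/O completes and the run proceeds to the summary below.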
00:58:55.076 3277.25 IOPS, 12.80 MiB/s [2024-12-09T10:10:03.059Z] 4724.00 IOPS, 18.45 MiB/s [2024-12-09T10:10:04.439Z] 5956.33 IOPS, 23.27 MiB/s [2024-12-09T10:10:05.382Z] 6841.14 IOPS, 26.72 MiB/s [2024-12-09T10:10:06.319Z] 7511.12 IOPS, 29.34 MiB/s [2024-12-09T10:10:07.274Z] 8033.44 IOPS, 31.38 MiB/s [2024-12-09T10:10:07.274Z] 8441.80 IOPS, 32.98 MiB/s 00:59:00.230 Latency(us) 00:59:00.230 [2024-12-09T10:10:07.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:59:00.230 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:59:00.230 Verification LBA range: start 0x0 length 0x4000 00:59:00.230 NVMe0n1 : 10.01 8447.83 33.00 0.00 0.00 15125.62 1209.12 3018433.62 00:59:00.230 [2024-12-09T10:10:07.274Z] =================================================================================================================== 00:59:00.230 [2024-12-09T10:10:07.274Z] Total : 8447.83 33.00 0.00 0.00 15125.62 1209.12 3018433.62 00:59:00.230 { 00:59:00.230 "results": [ 00:59:00.230 { 00:59:00.230 "job": "NVMe0n1", 00:59:00.230 "core_mask": "0x4", 00:59:00.230 "workload": "verify", 00:59:00.230 "status": "finished", 00:59:00.230 "verify_range": { 00:59:00.230 "start": 0, 00:59:00.230 "length": 16384 00:59:00.230 }, 00:59:00.230 "queue_depth": 128, 00:59:00.230 "io_size": 4096, 00:59:00.230 "runtime": 10.006002, 00:59:00.230 "iops": 8447.829612666477, 00:59:00.230 "mibps": 32.99933442447843, 00:59:00.230 "io_failed": 0, 00:59:00.230 "io_timeout": 0, 00:59:00.230 "avg_latency_us": 15125.620755833725, 00:59:00.230 "min_latency_us": 1209.1248908296943, 00:59:00.230 "max_latency_us": 3018433.6209606985 00:59:00.230 } 00:59:00.230 ], 00:59:00.230 "core_count": 1 00:59:00.230 } 00:59:00.230 10:10:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=98507 00:59:00.230 10:10:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:59:00.230 10:10:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:59:00.230 Running I/O for 10 seconds... 
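The bdevperf table and the JSON block above describe the same run: the MiB/s figure is derived from IOPS and the 4096-byte io_size (mibps = iops * io_size / 2^20). A short sketch confirming that arithmetic, with the values copied from the JSON result above (the dict literal is a trimmed copy of that result, not a parser for bdevperf output):

#!/usr/bin/env python3
# Cross-check the bdevperf result: MiB/s should equal iops * io_size / 2**20.
# Values below are copied from the "results" JSON in the log above.
result = {
    "job": "NVMe0n1",
    "io_size": 4096,           # bytes per I/O
    "runtime": 10.006002,      # seconds
    "iops": 8447.829612666477,
    "mibps": 32.99933442447843,
}

derived = result["iops"] * result["io_size"] / (1 << 20)
assert abs(derived - result["mibps"]) < 1e-6, (derived, result["mibps"])
print(f"{result['iops']:.2f} IOPS x {result['io_size']} B = {derived:.2f} MiB/s")

The same relation ties the Total row of the table (8447.83 IOPS, 33.00 MiB/s) to the raw counters in the JSON.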
00:59:01.169 10:10:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:59:01.432 11588.00 IOPS, 45.27 MiB/s [2024-12-09T10:10:08.476Z] [2024-12-09 10:10:08.252076] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1fd40 is same with the state(6) to be set
[... identical ERROR repeated ~45 times (timestamps 10:10:08.252212..10:10:08.252707) while the listener teardown drained tqpair 0xd1fd40 ...]
00:59:01.433 [2024-12-09 10:10:08.253663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:102504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [2024-12-09 10:10:08.253691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical NOTICE command/completion pairs omitted: READ sqid:1 (cids varying) nsid:1 lba:102512..102728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each aborted with ABORTED - SQ DELETION (00/08), timestamps 10:10:08.253706..10:10:08.254050 ...]
00:59:01.433 [2024-12-09 10:10:08.254057] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.433 [2024-12-09 10:10:08.254062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.433 [2024-12-09 10:10:08.254069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:102744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.433 [2024-12-09 10:10:08.254074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.433 [2024-12-09 10:10:08.254081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.433 [2024-12-09 10:10:08.254087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.433 [2024-12-09 10:10:08.254094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:102760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.433 [2024-12-09 10:10:08.254099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.433 [2024-12-09 10:10:08.254106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:102768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.433 [2024-12-09 10:10:08.254112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.433 [2024-12-09 10:10:08.254119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:102776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.433 [2024-12-09 10:10:08.254124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.433 [2024-12-09 10:10:08.254132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:102784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.433 [2024-12-09 10:10:08.254138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.433 [2024-12-09 10:10:08.254144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:102792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.433 [2024-12-09 10:10:08.254149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.433 [2024-12-09 10:10:08.254156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:102800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.433 [2024-12-09 10:10:08.254161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:102808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 
lba:102816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:102824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:102832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:102840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:102848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:102856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:102880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:102888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:102896 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:102904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:102912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:102920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:102928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:102936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:102944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:102960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:102968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 
10:10:08.254433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:102984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:102992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:103000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:103008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:103024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:103032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:103040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:103048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:103056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254567] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:103064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:103072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:103080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:103088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:103096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:103104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:103112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:103120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.434 [2024-12-09 10:10:08.254677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.434 [2024-12-09 10:10:08.254684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.435 [2024-12-09 10:10:08.254689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 [2024-12-09 10:10:08.254695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:103136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.435 [2024-12-09 10:10:08.254701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 [2024-12-09 10:10:08.254707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:103144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.435 [2024-12-09 10:10:08.254713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 [2024-12-09 10:10:08.254719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:103152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.435 [2024-12-09 10:10:08.254725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 [2024-12-09 10:10:08.254731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:103160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.435 [2024-12-09 10:10:08.254736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 [2024-12-09 10:10:08.254743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:103168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.435 [2024-12-09 10:10:08.254751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 [2024-12-09 10:10:08.254757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:103176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.435 [2024-12-09 10:10:08.254762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 [2024-12-09 10:10:08.254769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:103184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.435 [2024-12-09 10:10:08.254774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 [2024-12-09 10:10:08.254780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:103192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.435 [2024-12-09 10:10:08.254785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 [2024-12-09 10:10:08.254792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:103200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.435 [2024-12-09 10:10:08.254797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 [2024-12-09 10:10:08.254804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.435 [2024-12-09 10:10:08.254809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 [2024-12-09 10:10:08.254815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:103216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.435 [2024-12-09 10:10:08.254820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 [2024-12-09 10:10:08.254827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:103224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.435 [2024-12-09 10:10:08.254832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 [2024-12-09 10:10:08.254838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:103232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.435 [2024-12-09 10:10:08.254847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 [2024-12-09 10:10:08.254854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:103240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.435 [2024-12-09 10:10:08.254859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 [2024-12-09 10:10:08.254866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:103248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.435 [2024-12-09 10:10:08.254871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 [2024-12-09 10:10:08.254877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:103256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.435 [2024-12-09 10:10:08.254883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 [2024-12-09 10:10:08.254890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:103264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.435 [2024-12-09 10:10:08.254895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 [2024-12-09 10:10:08.254902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:103272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.435 [2024-12-09 10:10:08.254908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 [2024-12-09 10:10:08.254915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:103280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.435 [2024-12-09 10:10:08.254920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 [2024-12-09 10:10:08.254927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:103288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.435 [2024-12-09 10:10:08.254932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 [2024-12-09 10:10:08.254938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:103296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.435 [2024-12-09 10:10:08.254943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 
[2024-12-09 10:10:08.254950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:103304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.435 [2024-12-09 10:10:08.254955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 [2024-12-09 10:10:08.254962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:103312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.435 [2024-12-09 10:10:08.254967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 [2024-12-09 10:10:08.254974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:103320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.435 [2024-12-09 10:10:08.254979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 [2024-12-09 10:10:08.254985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:103328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.435 [2024-12-09 10:10:08.255020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 [2024-12-09 10:10:08.255027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:103336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.435 [2024-12-09 10:10:08.255032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 [2024-12-09 10:10:08.255039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:103344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.435 [2024-12-09 10:10:08.255045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 [2024-12-09 10:10:08.255052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:103352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.435 [2024-12-09 10:10:08.255057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 [2024-12-09 10:10:08.255064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.435 [2024-12-09 10:10:08.255072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 [2024-12-09 10:10:08.255078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:103368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:59:01.435 [2024-12-09 10:10:08.255084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 [2024-12-09 10:10:08.255109] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:01.435 [2024-12-09 10:10:08.255115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103376 len:8 PRP1 0x0 PRP2 0x0 00:59:01.435 [2024-12-09 10:10:08.255120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 [2024-12-09 10:10:08.255140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:01.435 [2024-12-09 10:10:08.255144] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:01.435 [2024-12-09 10:10:08.255149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103384 len:8 PRP1 0x0 PRP2 0x0 00:59:01.435 [2024-12-09 10:10:08.255154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 [2024-12-09 10:10:08.255160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:01.435 [2024-12-09 10:10:08.255164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:01.435 [2024-12-09 10:10:08.255168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103392 len:8 PRP1 0x0 PRP2 0x0 00:59:01.435 [2024-12-09 10:10:08.255173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 [2024-12-09 10:10:08.255179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:01.435 [2024-12-09 10:10:08.255183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:01.435 [2024-12-09 10:10:08.255187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103400 len:8 PRP1 0x0 PRP2 0x0 00:59:01.435 [2024-12-09 10:10:08.255192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 [2024-12-09 10:10:08.255197] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:01.435 [2024-12-09 10:10:08.255204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:01.435 [2024-12-09 10:10:08.255209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103408 len:8 PRP1 0x0 PRP2 0x0 00:59:01.435 [2024-12-09 10:10:08.255214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.435 [2024-12-09 10:10:08.255219] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:01.435 [2024-12-09 10:10:08.255223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:01.435 [2024-12-09 10:10:08.255228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103416 len:8 PRP1 0x0 PRP2 0x0 00:59:01.435 [2024-12-09 10:10:08.255233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.436 [2024-12-09 10:10:08.255238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:01.436 [2024-12-09 10:10:08.255242] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:01.436 [2024-12-09 10:10:08.255247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103424 len:8 PRP1 0x0 PRP2 0x0 00:59:01.436 [2024-12-09 10:10:08.255252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:59:01.436 [2024-12-09 10:10:08.255257] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:01.436 [2024-12-09 10:10:08.255261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:01.436 [2024-12-09 10:10:08.255266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103432 len:8 PRP1 0x0 PRP2 0x0 00:59:01.436 [2024-12-09 10:10:08.255272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.436 [2024-12-09 10:10:08.255277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:01.436 [2024-12-09 10:10:08.255281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:01.436 [2024-12-09 10:10:08.255302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103440 len:8 PRP1 0x0 PRP2 0x0 00:59:01.436 [2024-12-09 10:10:08.255308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.436 [2024-12-09 10:10:08.255313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:01.436 [2024-12-09 10:10:08.255317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:01.436 [2024-12-09 10:10:08.255321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103448 len:8 PRP1 0x0 PRP2 0x0 00:59:01.436 [2024-12-09 10:10:08.255327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.436 [2024-12-09 10:10:08.255332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:01.436 [2024-12-09 10:10:08.255336] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:01.436 [2024-12-09 10:10:08.255340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103456 len:8 PRP1 0x0 PRP2 0x0 00:59:01.436 [2024-12-09 10:10:08.255345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.436 [2024-12-09 10:10:08.255351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:01.436 [2024-12-09 10:10:08.255355] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:01.436 [2024-12-09 10:10:08.255360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103464 len:8 PRP1 0x0 PRP2 0x0 00:59:01.436 [2024-12-09 10:10:08.255365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.436 [2024-12-09 10:10:08.255370] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:01.436 [2024-12-09 10:10:08.255374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:01.436 [2024-12-09 10:10:08.255380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103472 len:8 PRP1 0x0 PRP2 0x0 00:59:01.436 [2024-12-09 10:10:08.255385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.436 [2024-12-09 
10:10:08.255390] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:01.436 [2024-12-09 10:10:08.255394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:01.436 [2024-12-09 10:10:08.255398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103480 len:8 PRP1 0x0 PRP2 0x0 00:59:01.436 [2024-12-09 10:10:08.255403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.436 [2024-12-09 10:10:08.255409] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:01.436 [2024-12-09 10:10:08.255413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:01.436 [2024-12-09 10:10:08.255417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103488 len:8 PRP1 0x0 PRP2 0x0 00:59:01.436 [2024-12-09 10:10:08.255422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.436 [2024-12-09 10:10:08.255428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:01.436 [2024-12-09 10:10:08.255432] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:01.436 [2024-12-09 10:10:08.275441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103496 len:8 PRP1 0x0 PRP2 0x0 00:59:01.436 [2024-12-09 10:10:08.275489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.436 [2024-12-09 10:10:08.275506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:01.436 [2024-12-09 10:10:08.275513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:01.436 [2024-12-09 10:10:08.275520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103504 len:8 PRP1 0x0 PRP2 0x0 00:59:01.436 [2024-12-09 10:10:08.275529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.436 [2024-12-09 10:10:08.275538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:01.436 [2024-12-09 10:10:08.275544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:01.436 [2024-12-09 10:10:08.275550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103512 len:8 PRP1 0x0 PRP2 0x0 00:59:01.436 [2024-12-09 10:10:08.275558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.436 [2024-12-09 10:10:08.275566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:59:01.436 [2024-12-09 10:10:08.275572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:59:01.436 [2024-12-09 10:10:08.275578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103520 len:8 PRP1 0x0 PRP2 0x0 00:59:01.436 [2024-12-09 10:10:08.275586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.436 [2024-12-09 10:10:08.275750] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:59:01.436 [2024-12-09 10:10:08.275765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.436 [2024-12-09 10:10:08.275775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:59:01.436 [2024-12-09 10:10:08.275783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.436 [2024-12-09 10:10:08.275792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:59:01.436 [2024-12-09 10:10:08.275800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.436 [2024-12-09 10:10:08.275808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:59:01.436 [2024-12-09 10:10:08.275816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:59:01.436 [2024-12-09 10:10:08.275824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d4f30 is same with the state(6) to be set 00:59:01.436 [2024-12-09 10:10:08.276076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:59:01.436 [2024-12-09 10:10:08.276093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d4f30 (9): Bad file descriptor 00:59:01.436 [2024-12-09 10:10:08.276183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:01.436 [2024-12-09 10:10:08.276199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d4f30 with addr=10.0.0.3, port=4420 00:59:01.436 [2024-12-09 10:10:08.276207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d4f30 is same with the state(6) to be set 00:59:01.436 [2024-12-09 10:10:08.276221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d4f30 (9): Bad file descriptor 00:59:01.436 [2024-12-09 10:10:08.276237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:59:01.436 [2024-12-09 10:10:08.276245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:59:01.436 [2024-12-09 10:10:08.276254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:59:01.436 [2024-12-09 10:10:08.276265] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
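A note on the dump above: ABORTED - SQ DELETION (00/08) is NVMe generic status 0x08, Command Aborted due to SQ Deletion. When the TCP qpair is torn down at the start of the reset, every command still queued on it is drained with that status, which is why the same pair of nvme_qpair.c records repeats once per outstanding I/O; the run visible here covers 128 commands (lba 102504-103520 in steps of 8), consistent with a 128-deep queue. The connect() failed, errno = 111 record above is ECONNREFUSED: nothing is listening on 10.0.0.3:4420 at this point, so the reset cannot complete. A quick tally over a saved copy of this console output, sketched with standard grep/awk (build.log is a placeholder filename, not something this job wrote):

grep -c 'ABORTED - SQ DELETION' build.log                              # total aborted completions
grep -oE '\*NOTICE\*: (READ|WRITE) sqid:1' build.log | sort | uniq -c  # aborted commands by opcode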
00:59:01.436 [2024-12-09 10:10:08.276274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:59:01.436 10:10:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:59:02.375 6406.50 IOPS, 25.03 MiB/s [2024-12-09T10:10:09.419Z] [2024-12-09 10:10:09.274455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:02.375 [2024-12-09 10:10:09.274527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d4f30 with addr=10.0.0.3, port=4420 00:59:02.375 [2024-12-09 10:10:09.274537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d4f30 is same with the state(6) to be set 00:59:02.375 [2024-12-09 10:10:09.274554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d4f30 (9): Bad file descriptor 00:59:02.375 [2024-12-09 10:10:09.274569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:59:02.375 [2024-12-09 10:10:09.274575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:59:02.375 [2024-12-09 10:10:09.274582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:59:02.375 [2024-12-09 10:10:09.274590] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:59:02.375 [2024-12-09 10:10:09.274598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:59:03.312 4271.00 IOPS, 16.68 MiB/s [2024-12-09T10:10:10.356Z] [2024-12-09 10:10:10.272777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:03.312 [2024-12-09 10:10:10.272828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d4f30 with addr=10.0.0.3, port=4420 00:59:03.312 [2024-12-09 10:10:10.272839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d4f30 is same with the state(6) to be set 00:59:03.312 [2024-12-09 10:10:10.272857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d4f30 (9): Bad file descriptor 00:59:03.312 [2024-12-09 10:10:10.272870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:59:03.312 [2024-12-09 10:10:10.272876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:59:03.312 [2024-12-09 10:10:10.272884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:59:03.312 [2024-12-09 10:10:10.272892] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
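The bdevperf status lines threaded through this stretch (6406.50 IOPS at ~2 s, 4271.00 at ~3 s, then 3203.25 and 2562.60 below) behave like a cumulative average since the start of the run: with the controller down no new I/O completes, so the completion count sits at 12,813 while only the elapsed-time divisor grows; the MiB/s column is just that IOPS figure times the 4096-byte I/O size. A one-line check with plain awk (nothing from this repo is needed):

for t in 2 3 4 5; do awk -v t="$t" 'BEGIN { printf "12813/%d = %.2f IOPS\n", t, 12813/t }'; done
# prints 6406.50, 4271.00, 3203.25, 2562.60 -- exactly the sequence in this log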
00:59:03.312 [2024-12-09 10:10:10.272900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:59:04.249 3203.25 IOPS, 12.51 MiB/s [2024-12-09T10:10:11.293Z] [2024-12-09 10:10:11.273589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:04.249 [2024-12-09 10:10:11.273679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d4f30 with addr=10.0.0.3, port=4420 00:59:04.249 [2024-12-09 10:10:11.273708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d4f30 is same with the state(6) to be set 00:59:04.249 [2024-12-09 10:10:11.273909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d4f30 (9): Bad file descriptor 00:59:04.249 [2024-12-09 10:10:11.274082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:59:04.249 [2024-12-09 10:10:11.274089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:59:04.249 [2024-12-09 10:10:11.274096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:59:04.249 [2024-12-09 10:10:11.274104] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:59:04.249 [2024-12-09 10:10:11.274112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:59:04.249 10:10:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:59:04.509 [2024-12-09 10:10:11.472410] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:59:04.509 10:10:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 98507 00:59:05.447 2562.60 IOPS, 10.01 MiB/s [2024-12-09T10:10:12.491Z] [2024-12-09 10:10:12.299017] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 
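That is the whole fault-injection pattern of this test case in one stretch of log: the target's listener was removed shortly before this excerpt (hence the ECONNREFUSED loop, roughly one reconnect attempt per second), host/timeout.sh@101 sleeps 3 s while those attempts fail, @102 re-adds the listener, tcp.c:1099 confirms the target is listening again, and the very next reset succeeds, after which throughput recovers. A sketch of that bounce; the add_listener call, NQN and address are verbatim from this log, while the remove_listener step is inferred from the matching @126 invocation later in the run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
"$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420  # host connect() now fails with errno 111
sleep 3                                                                  # reconnect attempts keep failing meanwhile
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420     # target listens again; next reset succeeds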
00:59:07.325 3822.00 IOPS, 14.93 MiB/s [2024-12-09T10:10:15.308Z] 4989.57 IOPS, 19.49 MiB/s [2024-12-09T10:10:16.267Z] 5868.25 IOPS, 22.92 MiB/s [2024-12-09T10:10:17.205Z] 6550.56 IOPS, 25.59 MiB/s [2024-12-09T10:10:17.205Z] 7114.50 IOPS, 27.79 MiB/s 00:59:10.161 Latency(us) 00:59:10.161 [2024-12-09T10:10:17.205Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:59:10.161 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:59:10.161 Verification LBA range: start 0x0 length 0x4000 00:59:10.161 NVMe0n1 : 10.01 7122.90 27.82 5175.01 0.00 10385.68 465.05 3033086.21 00:59:10.161 [2024-12-09T10:10:17.205Z] =================================================================================================================== 00:59:10.161 [2024-12-09T10:10:17.205Z] Total : 7122.90 27.82 5175.01 0.00 10385.68 0.00 3033086.21 00:59:10.161 { 00:59:10.161 "results": [ 00:59:10.161 { 00:59:10.161 "job": "NVMe0n1", 00:59:10.161 "core_mask": "0x4", 00:59:10.161 "workload": "verify", 00:59:10.161 "status": "finished", 00:59:10.161 "verify_range": { 00:59:10.161 "start": 0, 00:59:10.161 "length": 16384 00:59:10.161 }, 00:59:10.161 "queue_depth": 128, 00:59:10.161 "io_size": 4096, 00:59:10.161 "runtime": 10.006173, 00:59:10.161 "iops": 7122.903031958372, 00:59:10.161 "mibps": 27.82383996858739, 00:59:10.161 "io_failed": 51782, 00:59:10.161 "io_timeout": 0, 00:59:10.161 "avg_latency_us": 10385.6831797902, 00:59:10.161 "min_latency_us": 465.0480349344978, 00:59:10.161 "max_latency_us": 3033086.211353712 00:59:10.161 } 00:59:10.161 ], 00:59:10.161 "core_count": 1 00:59:10.161 } 00:59:10.161 10:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 98347 00:59:10.161 10:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 98347 ']' 00:59:10.161 10:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 98347 00:59:10.161 10:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:59:10.161 10:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:59:10.161 10:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98347 00:59:10.421 killing process with pid 98347 00:59:10.421 Received shutdown signal, test time was about 10.000000 seconds 00:59:10.421 00:59:10.421 Latency(us) 00:59:10.421 [2024-12-09T10:10:17.465Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:59:10.421 [2024-12-09T10:10:17.465Z] =================================================================================================================== 00:59:10.421 [2024-12-09T10:10:17.465Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:59:10.421 10:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:59:10.421 10:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:59:10.421 10:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98347' 00:59:10.421 10:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 98347 00:59:10.421 10:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 98347 00:59:10.421 10:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread 
-t 10 -f 00:59:10.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:59:10.421 10:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=98633 00:59:10.421 10:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 98633 /var/tmp/bdevperf.sock 00:59:10.421 10:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 98633 ']' 00:59:10.421 10:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:59:10.421 10:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:59:10.421 10:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:59:10.421 10:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:59:10.421 10:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:59:10.421 [2024-12-09 10:10:17.431242] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:59:10.421 [2024-12-09 10:10:17.431398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98633 ] 00:59:10.681 [2024-12-09 10:10:17.566801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:59:10.681 [2024-12-09 10:10:17.613831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:59:11.619 10:10:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:59:11.619 10:10:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:59:11.619 10:10:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98633 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:59:11.619 10:10:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=98662 00:59:11.619 10:10:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:59:11.619 10:10:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:59:11.879 NVMe0n1 00:59:11.879 10:10:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=98712 00:59:11.879 10:10:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:59:11.879 10:10:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:59:12.138 Running I/O for 10 seconds... 
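The second bdevperf instance is brought up the same way as the first but with an explicit reconnect policy on the attach. A hand-replay of the sequence above (every command is verbatim from this log; the comments on the two timeout flags are my gloss of the bdev_nvme options and worth double-checking against the SPDK docs for this revision):

sock=/var/tmp/bdevperf.sock
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w randread -t 10 -f &
# (the harness's waitforlisten polls the RPC socket before issuing any RPCs)
"$rpc" -s "$sock" bdev_nvme_set_options -r -1 -e 9
# --reconnect-delay-sec 2:    wait 2 s between reconnect attempts after the connection drops
# --ctrlr-loss-timeout-sec 5: keep retrying for about 5 s, then give up and delete the controller
"$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2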
00:59:13.079 10:10:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:59:13.079 23273.00 IOPS, 90.91 MiB/s [2024-12-09T10:10:20.123Z]
00:59:13.079 [2024-12-09 10:10:20.026068] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd23250 is same with the state(6) to be set
00:59:13.080 [... the same message repeats for tqpair=0xd23250 roughly two dozen more times, through 10:10:20.026225 ...]
00:59:13.080 [2024-12-09 10:10:20.026431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:35576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:59:13.080 [2024-12-09 10:10:20.026510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:59:13.083 [... over a hundred further READ/ABORTED - SQ DELETION pairs, one per outstanding command on qid:1 (various cid/lba values), through 10:10:20.028330 ...]
00:59:13.083 [2024-12-09 10:10:20.028336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed0400 is same with the state(6) to be set
00:59:13.083 [2024-12-09 10:10:20.028345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:59:13.083 [2024-12-09 10:10:20.028349] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:59:13.083 [2024-12-09 10:10:20.028354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33232 len:8 PRP1 0x0 PRP2 0x0
00:59:13.083 [2024-12-09 10:10:20.028364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:59:13.083 [2024-12-09 10:10:20.028493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:59:13.083 [2024-12-09 10:10:20.028507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:59:13.083 [... the same pair repeats for the remaining admin commands, qid:0 cid:1-3, through 10:10:20.028556 ...]
00:59:13.083 [2024-12-09 10:10:20.028561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe64f30 is same with the state(6) to be set
00:59:13.083 [2024-12-09 10:10:20.028755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:59:13.083 [2024-12-09 10:10:20.028791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe64f30 (9): Bad file descriptor
00:59:13.083 [2024-12-09 10:10:20.028876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:13.083 [2024-12-09 10:10:20.028899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe64f30 with addr=10.0.0.3, port=4420
00:59:13.083 [2024-12-09 10:10:20.028913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe64f30 is same with the state(6) to be set
00:59:13.083 [2024-12-09 10:10:20.028931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe64f30 (9): Bad file descriptor
00:59:13.083 [2024-12-09 10:10:20.028944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:59:13.083 [2024-12-09 10:10:20.028951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:59:13.083 [2024-12-09 10:10:20.028958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:59:13.083 [2024-12-09 10:10:20.028967] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
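Worth noting before the retries continue: every reconnect attempt above fails in posix_sock_create with errno = 111, which on Linux is ECONNREFUSED — exactly what you'd expect, since host/timeout.sh@126 just removed the listener on 10.0.0.3:4420, so the target refuses the host's TCP connections until the controller-loss timeout expires. To confirm the errno mapping (a one-liner; the header path is the usual Linux location but may vary by distro):

  grep -n 'define ECONNREFUSED' /usr/include/asm-generic/errno.h
  # expected: #define ECONNREFUSED 111 /* Connection refused */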
00:59:13.083 [2024-12-09 10:10:20.028977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:59:13.083 10:10:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 98712
00:59:14.961 12873.00 IOPS, 50.29 MiB/s [2024-12-09T10:10:22.265Z]
00:59:14.961 8582.00 IOPS, 33.52 MiB/s [2024-12-09T10:10:22.265Z]
00:59:15.221 [2024-12-09 10:10:22.037270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:15.221 [2024-12-09 10:10:22.037333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe64f30 with addr=10.0.0.3, port=4420
00:59:15.221 [2024-12-09 10:10:22.037343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe64f30 is same with the state(6) to be set
00:59:15.221 [2024-12-09 10:10:22.037360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe64f30 (9): Bad file descriptor
00:59:15.221 [2024-12-09 10:10:22.037374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:59:15.221 [2024-12-09 10:10:22.037380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:59:15.221 [2024-12-09 10:10:22.037387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:59:15.221 [2024-12-09 10:10:22.037395] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
00:59:15.221 [2024-12-09 10:10:22.037403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:59:17.120 6436.50 IOPS, 25.14 MiB/s [2024-12-09T10:10:24.164Z]
00:59:17.120 5149.20 IOPS, 20.11 MiB/s [2024-12-09T10:10:24.164Z]
00:59:17.120 [2024-12-09 10:10:24.033675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:17.120 [2024-12-09 10:10:24.033717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe64f30 with addr=10.0.0.3, port=4420
00:59:17.120 [2024-12-09 10:10:24.033728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe64f30 is same with the state(6) to be set
00:59:17.120 [2024-12-09 10:10:24.033745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe64f30 (9): Bad file descriptor
00:59:17.120 [2024-12-09 10:10:24.033759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:59:17.120 [2024-12-09 10:10:24.033765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:59:17.120 [2024-12-09 10:10:24.033773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:59:17.120 [2024-12-09 10:10:24.033780] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
00:59:17.120 [2024-12-09 10:10:24.033788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:59:19.023 4291.00 IOPS, 16.76 MiB/s [2024-12-09T10:10:26.067Z]
00:59:19.023 3678.00 IOPS, 14.37 MiB/s [2024-12-09T10:10:26.067Z]
00:59:19.023 [2024-12-09 10:10:26.029999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:59:19.023 [2024-12-09 10:10:26.030050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:59:19.023 [2024-12-09 10:10:26.030058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:59:19.023 [2024-12-09 10:10:26.030067] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:59:19.023 [2024-12-09 10:10:26.030076] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:59:20.222 3218.25 IOPS, 12.57 MiB/s 00:59:20.222 Latency(us) 00:59:20.222 [2024-12-09T10:10:27.266Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:59:20.222 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:59:20.222 NVMe0n1 : 8.11 3175.30 12.40 15.79 0.00 40173.99 1810.11 7033243.39 00:59:20.222 [2024-12-09T10:10:27.266Z] =================================================================================================================== 00:59:20.222 [2024-12-09T10:10:27.266Z] Total : 3175.30 12.40 15.79 0.00 40173.99 1810.11 7033243.39 00:59:20.222 { 00:59:20.222 "results": [ 00:59:20.222 { 00:59:20.222 "job": "NVMe0n1", 00:59:20.222 "core_mask": "0x4", 00:59:20.222 "workload": "randread", 00:59:20.222 "status": "finished", 00:59:20.222 "queue_depth": 128, 00:59:20.222 "io_size": 4096, 00:59:20.222 "runtime": 8.10821, 00:59:20.222 "iops": 3175.3000970621138, 00:59:20.222 "mibps": 12.403516004148882, 00:59:20.222 "io_failed": 128, 00:59:20.222 "io_timeout": 0, 00:59:20.222 "avg_latency_us": 40173.993580850154, 00:59:20.222 "min_latency_us": 1810.1100436681222, 00:59:20.222 "max_latency_us": 7033243.388646288 00:59:20.222 } 00:59:20.222 ], 00:59:20.222 "core_count": 1 00:59:20.222 } 00:59:20.222 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:59:20.222 Attaching 5 probes... 
00:59:20.222 1200.071783: reset bdev controller NVMe0
00:59:20.222 1200.152238: reconnect bdev controller NVMe0
00:59:20.222 3208.497563: reconnect delay bdev controller NVMe0
00:59:20.222 3208.515604: reconnect bdev controller NVMe0
00:59:20.222 5204.909144: reconnect delay bdev controller NVMe0
00:59:20.222 5204.925851: reconnect bdev controller NVMe0
00:59:20.222 7201.298007: reconnect delay bdev controller NVMe0
00:59:20.222 7201.317399: reconnect bdev controller NVMe0
00:59:20.222 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0'
00:59:20.222 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 ))
00:59:20.222 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 98662
00:59:20.222 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:59:20.222 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 98633
00:59:20.222 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 98633 ']'
00:59:20.222 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 98633
00:59:20.222 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
00:59:20.222 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:59:20.222 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98633
00:59:20.222 killing process with pid 98633 Received shutdown signal, test time was about 8.192487 seconds
00:59:20.222
00:59:20.222 Latency(us)
00:59:20.222 [2024-12-09T10:10:27.266Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:59:20.222 [2024-12-09T10:10:27.266Z] ===================================================================================================================
00:59:20.222 [2024-12-09T10:10:27.266Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:59:20.222 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:59:20.222 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:59:20.222 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98633'
00:59:20.222 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 98633
00:59:20.222 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 98633
00:59:20.483 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:59:20.743 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT
00:59:20.743 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini
00:59:20.743 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup
00:59:20.743 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync
00:59:20.743 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:59:20.743 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e
00:59:20.743 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20}
00:59:20.743 10:10:27
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:59:20.743 rmmod nvme_tcp 00:59:20.743 rmmod nvme_fabrics 00:59:20.743 rmmod nvme_keyring 00:59:20.743 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:59:20.743 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:59:20.743 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:59:20.743 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 98055 ']' 00:59:20.743 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 98055 00:59:20.743 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 98055 ']' 00:59:20.743 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 98055 00:59:20.743 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:59:20.743 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:59:20.743 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98055 00:59:20.743 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:59:20.743 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:59:20.744 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98055' 00:59:20.744 killing process with pid 98055 00:59:20.744 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 98055 00:59:20.744 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 98055 00:59:21.015 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:59:21.015 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:59:21.015 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:59:21.015 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:59:21.015 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:59:21.015 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:59:21.015 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:59:21.015 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:59:21.015 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:59:21.015 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:59:21.015 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:59:21.015 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:59:21.015 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:59:21.015 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:59:21.015 10:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:59:21.015 10:10:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:59:21.015 10:10:28 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:59:21.015 10:10:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:59:21.015 10:10:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:59:21.275 10:10:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:59:21.275 10:10:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:59:21.275 10:10:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:59:21.275 10:10:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:59:21.275 10:10:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:59:21.275 10:10:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:59:21.275 10:10:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:59:21.275 10:10:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:59:21.275 ************************************ 00:59:21.275 END TEST nvmf_timeout 00:59:21.275 ************************************ 00:59:21.275 00:59:21.275 real 0m46.393s 00:59:21.275 user 2m15.911s 00:59:21.275 sys 0m4.448s 00:59:21.275 10:10:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:21.275 10:10:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:59:21.275 10:10:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:59:21.275 10:10:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:59:21.275 ************************************ 00:59:21.275 END TEST nvmf_host 00:59:21.275 ************************************ 00:59:21.275 00:59:21.275 real 5m43.897s 00:59:21.275 user 14m34.211s 00:59:21.275 sys 1m3.890s 00:59:21.275 10:10:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:21.275 10:10:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:59:21.275 10:10:28 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:59:21.275 10:10:28 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:59:21.275 10:10:28 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:59:21.275 10:10:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:59:21.275 10:10:28 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:21.275 10:10:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:59:21.275 ************************************ 00:59:21.275 START TEST nvmf_target_core_interrupt_mode 00:59:21.275 ************************************ 00:59:21.275 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:59:21.536 * Looking for test storage... 
00:59:21.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:59:21.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:21.536 --rc genhtml_branch_coverage=1 00:59:21.536 --rc genhtml_function_coverage=1 00:59:21.536 --rc genhtml_legend=1 00:59:21.536 --rc geninfo_all_blocks=1 00:59:21.536 --rc geninfo_unexecuted_blocks=1 00:59:21.536 00:59:21.536 ' 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:59:21.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:21.536 --rc genhtml_branch_coverage=1 00:59:21.536 --rc genhtml_function_coverage=1 00:59:21.536 --rc genhtml_legend=1 00:59:21.536 --rc geninfo_all_blocks=1 00:59:21.536 --rc geninfo_unexecuted_blocks=1 00:59:21.536 00:59:21.536 ' 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:59:21.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:21.536 --rc genhtml_branch_coverage=1 00:59:21.536 --rc genhtml_function_coverage=1 00:59:21.536 --rc genhtml_legend=1 00:59:21.536 --rc geninfo_all_blocks=1 00:59:21.536 --rc geninfo_unexecuted_blocks=1 00:59:21.536 00:59:21.536 ' 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:59:21.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:21.536 --rc genhtml_branch_coverage=1 00:59:21.536 --rc genhtml_function_coverage=1 00:59:21.536 --rc genhtml_legend=1 00:59:21.536 --rc geninfo_all_blocks=1 00:59:21.536 --rc geninfo_unexecuted_blocks=1 00:59:21.536 00:59:21.536 ' 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:21.536 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:59:21.537 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:21.537 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:59:21.537 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:59:21.537 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:59:21.537 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:59:21.537 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:59:21.537 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:59:21.537 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:59:21.537 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:59:21.537 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:59:21.537 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:59:21.537 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:59:21.798 ************************************ 00:59:21.798 START TEST nvmf_abort 00:59:21.798 ************************************ 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:59:21.798 * Looking for test storage... 00:59:21.798 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:59:21.798 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:59:21.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:21.798 --rc genhtml_branch_coverage=1 00:59:21.798 --rc genhtml_function_coverage=1 00:59:21.798 --rc genhtml_legend=1 00:59:21.798 --rc geninfo_all_blocks=1 00:59:21.799 --rc geninfo_unexecuted_blocks=1 00:59:21.799 00:59:21.799 ' 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:59:21.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:21.799 --rc genhtml_branch_coverage=1 00:59:21.799 --rc genhtml_function_coverage=1 00:59:21.799 --rc genhtml_legend=1 00:59:21.799 --rc geninfo_all_blocks=1 00:59:21.799 --rc geninfo_unexecuted_blocks=1 00:59:21.799 00:59:21.799 ' 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:59:21.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:21.799 --rc genhtml_branch_coverage=1 00:59:21.799 --rc genhtml_function_coverage=1 00:59:21.799 --rc genhtml_legend=1 00:59:21.799 --rc geninfo_all_blocks=1 00:59:21.799 --rc geninfo_unexecuted_blocks=1 00:59:21.799 00:59:21.799 ' 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:59:21.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:21.799 --rc genhtml_branch_coverage=1 00:59:21.799 --rc genhtml_function_coverage=1 00:59:21.799 --rc genhtml_legend=1 00:59:21.799 --rc geninfo_all_blocks=1 00:59:21.799 --rc geninfo_unexecuted_blocks=1 00:59:21.799 00:59:21.799 ' 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:59:21.799 10:10:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:59:21.799 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:59:22.060 Cannot find device "nvmf_init_br" 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # true 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:59:22.060 Cannot find device "nvmf_init_br2" 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # true 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:59:22.060 Cannot find device "nvmf_tgt_br" 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # true 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:59:22.060 Cannot find device "nvmf_tgt_br2" 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # true 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:59:22.060 Cannot find device "nvmf_init_br" 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # true 00:59:22.060 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:59:22.060 Cannot find device "nvmf_init_br2" 00:59:22.061 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # true 00:59:22.061 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:59:22.061 Cannot find device "nvmf_tgt_br" 00:59:22.061 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # true 00:59:22.061 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:59:22.061 Cannot find device "nvmf_tgt_br2" 00:59:22.061 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@169 -- # true 00:59:22.061 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:59:22.061 Cannot find device "nvmf_br" 00:59:22.061 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # true 00:59:22.061 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:59:22.061 Cannot find device "nvmf_init_if" 00:59:22.061 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # true 00:59:22.061 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:59:22.061 Cannot find device "nvmf_init_if2" 00:59:22.061 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # true 00:59:22.061 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:59:22.061 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:59:22.061 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # true 00:59:22.061 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:59:22.061 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:59:22.061 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # true 00:59:22.061 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:59:22.061 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:59:22.061 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:59:22.061 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:59:22.061 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:59:22.061 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:59:22.322 
10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:59:22.322 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:59:22.322 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.115 ms 00:59:22.322 00:59:22.322 --- 10.0.0.3 ping statistics --- 00:59:22.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:59:22.322 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:59:22.322 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:59:22.322 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.092 ms 00:59:22.322 00:59:22.322 --- 10.0.0.4 ping statistics --- 00:59:22.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:59:22.322 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:59:22.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:59:22.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:59:22.322 00:59:22.322 --- 10.0.0.1 ping statistics --- 00:59:22.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:59:22.322 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:59:22.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:59:22.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:59:22.322 00:59:22.322 --- 10.0.0.2 ping statistics --- 00:59:22.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:59:22.322 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@461 -- # return 0 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:59:22.322 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:59:22.323 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:59:22.323 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=99133 00:59:22.323 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE
00:59:22.323 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 99133
00:59:22.323 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 99133 ']'
00:59:22.323 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:59:22.323 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100
00:59:22.323 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable
00:59:22.323 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:59:22.584 [2024-12-09 10:10:29.383035] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:59:22.584 [2024-12-09 10:10:29.383901] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization...
00:59:22.584 [2024-12-09 10:10:29.383979] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:59:22.584 [2024-12-09 10:10:29.534234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:59:22.584 [2024-12-09 10:10:29.578301] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:59:22.584 [2024-12-09 10:10:29.578417] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:59:22.584 [2024-12-09 10:10:29.578426] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:59:22.584 [2024-12-09 10:10:29.578430] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:59:22.584 [2024-12-09 10:10:29.578434] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:59:22.584 [2024-12-09 10:10:29.579339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:59:22.584 [2024-12-09 10:10:29.579618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:59:22.584 [2024-12-09 10:10:29.579621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:59:22.584 [2024-12-09 10:10:29.647884] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:59:22.845 [2024-12-09 10:10:29.648702] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:59:22.845 [2024-12-09 10:10:29.649728] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:59:22.845 [2024-12-09 10:10:29.649779] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
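At this point the nvmf_abort test has launched its target: nvmfappstart runs nvmf_tgt inside the nvmf_tgt_ns_spdk namespace with --interrupt-mode and core mask 0xE, and waitforlisten polls until the RPC socket answers. A condensed sketch of that launch-and-wait pattern, assuming the repo-relative paths, the default /var/tmp/spdk.sock socket, and the 100-retry budget visible in the log:

# Sketch of the nvmfappstart/waitforlisten sequence above (values from the log).
ip netns exec nvmf_tgt_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
    # rpc_get_methods succeeds once the app is listening on the RPC socket.
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.5
done

Once the app is up, the banner above confirms interrupt mode took effect: the app thread and each nvmf poll group thread are switched to intr mode instead of busy polling.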
00:59:23.415 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:59:23.415 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:59:23.415 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:59:23.415 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:59:23.415 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:59:23.415 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:59:23.415 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:59:23.415 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:23.415 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:59:23.415 [2024-12-09 10:10:30.320654] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:59:23.415 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:23.415 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:59:23.415 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:23.415 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:59:23.415 Malloc0 00:59:23.415 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:23.415 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:59:23.415 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:23.415 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:59:23.415 Delay0 00:59:23.415 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:23.415 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:59:23.415 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:23.415 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:59:23.415 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:23.415 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:59:23.415 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:23.415 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:59:23.415 10:10:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:23.415 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:59:23.415 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:23.415 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:59:23.415 [2024-12-09 10:10:30.424406] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:59:23.415 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:23.415 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:59:23.415 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:23.415 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:59:23.415 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:23.415 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:59:23.675 [2024-12-09 10:10:30.611221] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:59:26.216 Initializing NVMe Controllers 00:59:26.217 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:59:26.217 controller IO queue size 128 less than required 00:59:26.217 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:59:26.217 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:59:26.217 Initialization complete. Launching workers. 
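The abort example just launched drives reads at queue depth 128 against the Delay0-backed namespace; the controller advertises a smaller I/O queue, so most submissions sit queued and are abortable. The invocation, copied from the trace (readings of the short flags follow the tool's perf-style usage conventions and are best-effort, not taken from this log):

    # -r transport ID, -c core mask (core 0), -t seconds to run,
    # -l log level, -q queue depth of outstanding I/O to abort
    /home/vagrant/spdk_repo/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128

In the counters that follow, aborts submitted equal successes plus unsuccessful aborts (41397 = 41336 + 61); an unsuccessful abort typically indicates the target I/O had already completed (a best-effort reading).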
00:59:26.217 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 41336 00:59:26.217 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 41397, failed to submit 66 00:59:26.217 success 41336, unsuccessful 61, failed 0 00:59:26.217 10:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:59:26.217 10:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:26.217 10:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:59:26.217 10:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:26.217 10:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:59:26.217 10:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:59:26.217 10:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:59:26.217 10:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:59:26.217 10:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:59:26.217 10:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:59:26.217 10:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:59:26.217 10:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:59:26.217 rmmod nvme_tcp 00:59:26.217 rmmod nvme_fabrics 00:59:26.217 rmmod nvme_keyring 00:59:26.217 10:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:59:26.217 10:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:59:26.217 10:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:59:26.217 10:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 99133 ']' 00:59:26.217 10:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 99133 00:59:26.217 10:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 99133 ']' 00:59:26.217 10:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 99133 00:59:26.217 10:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:59:26.217 10:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:59:26.217 10:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99133 00:59:26.217 killing process with pid 99133 00:59:26.217 10:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:59:26.217 10:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:59:26.217 10:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99133' 00:59:26.217 
10:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 99133 00:59:26.217 10:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 99133 00:59:26.217 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:59:26.217 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:59:26.217 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:59:26.217 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:59:26.217 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:59:26.217 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:59:26.217 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:59:26.217 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:59:26.217 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:59:26.217 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:59:26.217 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:59:26.217 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:59:26.217 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:59:26.217 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:59:26.217 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:59:26.217 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:59:26.217 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:59:26.218 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:59:26.218 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:59:26.218 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:59:26.218 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:59:26.218 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:59:26.218 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:59:26.218 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:59:26.218 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:59:26.218 10:10:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:59:26.477 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:59:26.477 00:59:26.477 real 0m4.699s 00:59:26.477 user 0m9.381s 00:59:26.477 sys 0m1.338s 00:59:26.477 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:26.477 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:59:26.477 ************************************ 00:59:26.477 END TEST nvmf_abort 00:59:26.477 ************************************ 00:59:26.477 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:59:26.477 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:59:26.477 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:26.477 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:59:26.477 ************************************ 00:59:26.477 START TEST nvmf_ns_hotplug_stress 00:59:26.477 ************************************ 00:59:26.477 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:59:26.477 * Looking for test storage... 00:59:26.477 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:59:26.477 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:59:26.477 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:59:26.477 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:59:26.738 10:10:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:59:26.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:26.738 --rc genhtml_branch_coverage=1 00:59:26.738 --rc genhtml_function_coverage=1 00:59:26.738 --rc genhtml_legend=1 00:59:26.738 --rc geninfo_all_blocks=1 00:59:26.738 --rc geninfo_unexecuted_blocks=1 00:59:26.738 00:59:26.738 ' 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:59:26.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:26.738 --rc genhtml_branch_coverage=1 00:59:26.738 --rc genhtml_function_coverage=1 00:59:26.738 --rc genhtml_legend=1 00:59:26.738 --rc geninfo_all_blocks=1 00:59:26.738 --rc geninfo_unexecuted_blocks=1 00:59:26.738 00:59:26.738 
' 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:59:26.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:26.738 --rc genhtml_branch_coverage=1 00:59:26.738 --rc genhtml_function_coverage=1 00:59:26.738 --rc genhtml_legend=1 00:59:26.738 --rc geninfo_all_blocks=1 00:59:26.738 --rc geninfo_unexecuted_blocks=1 00:59:26.738 00:59:26.738 ' 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:59:26.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:26.738 --rc genhtml_branch_coverage=1 00:59:26.738 --rc genhtml_function_coverage=1 00:59:26.738 --rc genhtml_legend=1 00:59:26.738 --rc geninfo_all_blocks=1 00:59:26.738 --rc geninfo_unexecuted_blocks=1 00:59:26.738 00:59:26.738 ' 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:59:26.738 10:10:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:26.738 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:59:26.739 10:10:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:59:26.739 Cannot find device "nvmf_init_br" 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 
-- # ip link set nvmf_init_br2 nomaster 00:59:26.739 Cannot find device "nvmf_init_br2" 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:59:26.739 Cannot find device "nvmf_tgt_br" 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:59:26.739 Cannot find device "nvmf_tgt_br2" 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:59:26.739 Cannot find device "nvmf_init_br" 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:59:26.739 Cannot find device "nvmf_init_br2" 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:59:26.739 Cannot find device "nvmf_tgt_br" 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:59:26.739 Cannot find device "nvmf_tgt_br2" 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:59:26.739 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:59:27.000 Cannot find device "nvmf_br" 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # true 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:59:27.000 Cannot find device "nvmf_init_if" 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:59:27.000 Cannot find device "nvmf_init_if2" 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:59:27.000 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:59:27.000 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:59:27.000 10:10:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:59:27.000 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:59:27.000 10:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:59:27.000 10:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:59:27.000 10:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:59:27.000 10:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:59:27.000 10:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:59:27.000 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:59:27.000 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.145 ms 00:59:27.000 00:59:27.000 --- 10.0.0.3 ping statistics --- 00:59:27.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:59:27.000 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:59:27.000 10:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:59:27.000 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:59:27.000 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:59:27.000 00:59:27.000 --- 10.0.0.4 ping statistics --- 00:59:27.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:59:27.000 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:59:27.000 10:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:59:27.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:59:27.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:59:27.000 00:59:27.000 --- 10.0.0.1 ping statistics --- 00:59:27.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:59:27.000 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:59:27.000 10:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:59:27.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:59:27.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:59:27.000 00:59:27.000 --- 10.0.0.2 ping statistics --- 00:59:27.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:59:27.000 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:59:27.000 10:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:59:27.000 10:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0 00:59:27.000 10:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:59:27.000 10:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:59:27.000 10:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:59:27.000 10:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:59:27.000 10:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:59:27.000 10:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:59:27.000 10:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:59:27.260 10:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:59:27.260 10:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:59:27.260 10:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:59:27.260 10:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:59:27.260 10:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=99454 00:59:27.260 10:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:59:27.260 10:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 99454 00:59:27.260 10:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 99454 ']' 00:59:27.260 10:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:59:27.260 10:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:59:27.260 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:59:27.260 10:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:59:27.260 10:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:59:27.260 10:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:59:27.260 [2024-12-09 10:10:34.098526] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:59:27.260 [2024-12-09 10:10:34.099303] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 00:59:27.260 [2024-12-09 10:10:34.099343] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:59:27.260 [2024-12-09 10:10:34.252976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:59:27.260 [2024-12-09 10:10:34.298386] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:59:27.260 [2024-12-09 10:10:34.298428] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:59:27.260 [2024-12-09 10:10:34.298434] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:59:27.260 [2024-12-09 10:10:34.298439] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:59:27.260 [2024-12-09 10:10:34.298443] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:59:27.260 [2024-12-09 10:10:34.299250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:59:27.260 [2024-12-09 10:10:34.299383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:59:27.260 [2024-12-09 10:10:34.299387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:59:27.521 [2024-12-09 10:10:34.368627] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:59:27.521 [2024-12-09 10:10:34.370041] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:59:27.521 [2024-12-09 10:10:34.370187] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:59:27.521 [2024-12-09 10:10:34.370495] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
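With the second target up in interrupt mode, the trace below runs the same bring-up through rpc.py, this time capping the subsystem at 10 namespaces (-m) and adding a null bdev for the stress loop to resize. Condensed, with the commands exactly as they appear in the following trace (rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    rpc.py bdev_malloc_create 32 512 -b Malloc0
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc.py bdev_null_create NULL1 1000 512
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1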
00:59:28.091 10:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:59:28.091 10:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:59:28.091 10:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:59:28.091 10:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:59:28.091 10:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:59:28.091 10:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:59:28.091 10:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:59:28.091 10:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:59:28.350 [2024-12-09 10:10:35.212445] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:59:28.350 10:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:59:28.609 10:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:59:28.869 [2024-12-09 10:10:35.665064] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:59:28.869 10:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:59:28.869 10:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:59:29.129 Malloc0 00:59:29.129 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:59:29.389 Delay0 00:59:29.389 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:29.648 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:59:29.908 NULL1 00:59:29.908 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:59:30.167 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=99585 00:59:30.168 10:10:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:59:30.168 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:30.168 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:30.168 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:30.428 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:59:30.428 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:59:30.687 true 00:59:30.687 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:30.687 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:30.947 10:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:31.206 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:59:31.206 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:59:31.465 true 00:59:31.465 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:31.465 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:31.725 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:31.725 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:59:31.725 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:59:31.984 true 00:59:31.984 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:31.984 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:32.243 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:32.502 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:59:32.502 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:59:32.761 true 00:59:32.761 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:32.761 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:32.761 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:33.021 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:59:33.021 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:59:33.281 true 00:59:33.281 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:33.281 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:33.541 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:33.800 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:59:33.800 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:59:33.800 true 00:59:34.060 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:34.060 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:34.060 10:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:34.321 10:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:59:34.321 10:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:59:34.582 true 00:59:34.582 10:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:34.582 10:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:34.846 10:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:35.116 10:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:59:35.116 10:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:59:35.116 true 00:59:35.382 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:35.382 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:35.382 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:35.642 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:59:35.642 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:59:35.901 true 00:59:35.901 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:35.901 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:36.161 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:36.161 10:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:59:36.161 10:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:59:36.421 true 00:59:36.421 10:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:36.421 10:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:36.681 10:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:36.940 10:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:59:36.940 10:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:59:37.200 true 00:59:37.200 
10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:37.200 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:37.200 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:37.459 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:59:37.459 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:59:37.719 true 00:59:37.719 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:37.719 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:37.978 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:38.247 10:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:59:38.247 10:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:59:38.247 true 00:59:38.247 10:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:38.247 10:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:38.507 10:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:38.766 10:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:59:38.766 10:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:59:39.025 true 00:59:39.025 10:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:39.025 10:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:39.285 10:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:39.285 10:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:59:39.285 
10:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:59:39.544 true 00:59:39.544 10:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:39.544 10:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:39.803 10:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:40.063 10:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:59:40.063 10:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:59:40.063 true 00:59:40.322 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:40.322 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:40.322 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:40.581 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:59:40.581 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:59:40.840 true 00:59:40.840 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:40.840 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:41.100 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:41.359 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:59:41.359 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:59:41.359 true 00:59:41.359 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:41.359 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:41.619 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:41.878 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:59:41.878 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:59:42.137 true 00:59:42.137 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:42.137 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:42.137 10:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:42.397 10:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:59:42.397 10:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:59:42.657 true 00:59:42.657 10:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:42.657 10:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:42.917 10:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:43.177 10:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:59:43.177 10:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:59:43.177 true 00:59:43.177 10:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:43.177 10:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:43.437 10:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:43.697 10:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:59:43.697 10:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:59:43.957 true 00:59:43.957 10:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:43.957 10:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:43.957 10:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:44.217 10:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:59:44.217 10:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:59:44.476 true 00:59:44.476 10:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:44.476 10:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:44.736 10:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:44.995 10:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:59:44.995 10:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:59:44.995 true 00:59:44.995 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:44.995 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:45.255 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:45.514 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:59:45.514 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:59:45.774 true 00:59:45.774 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:45.774 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:46.033 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:46.033 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:59:46.033 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:59:46.293 true 00:59:46.293 10:10:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:46.293 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:46.552 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:46.811 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:59:46.811 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:59:46.811 true 00:59:47.071 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:47.071 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:47.071 10:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:47.331 10:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:59:47.331 10:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:59:47.591 true 00:59:47.591 10:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:47.591 10:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:47.850 10:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:48.109 10:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:59:48.109 10:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:59:48.109 true 00:59:48.109 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:48.109 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:48.368 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:48.627 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:59:48.627 10:10:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:59:48.888 true 00:59:48.888 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:48.888 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:48.888 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:49.153 10:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:59:49.153 10:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:59:49.500 true 00:59:49.500 10:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:49.500 10:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:49.759 10:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:49.759 10:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:59:49.759 10:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:59:50.018 true 00:59:50.018 10:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:50.018 10:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:50.278 10:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:50.552 10:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:59:50.552 10:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:59:50.552 true 00:59:50.552 10:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:50.552 10:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:50.812 10:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:51.072 10:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:59:51.072 10:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:59:51.331 true 00:59:51.331 10:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:51.331 10:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:51.592 10:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:51.592 10:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:59:51.592 10:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:59:51.852 true 00:59:51.852 10:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:51.852 10:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:52.112 10:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:52.372 10:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:59:52.372 10:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:59:52.372 true 00:59:52.372 10:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:52.372 10:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:52.631 10:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:52.891 10:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:59:52.891 10:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:59:53.150 true 00:59:53.150 10:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:53.150 10:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:53.409 10:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:53.409 10:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:59:53.409 10:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:59:53.669 true 00:59:53.669 10:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:53.669 10:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:53.929 10:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:54.188 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:59:54.188 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:59:54.188 true 00:59:54.448 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:54.448 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:54.448 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:54.707 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:59:54.707 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:59:54.967 true 00:59:54.967 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:54.967 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:55.226 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:55.486 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:59:55.486 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:59:55.486 true 00:59:55.486 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:55.486 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:55.745 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:56.004 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:59:56.005 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:59:56.264 true 00:59:56.264 10:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:56.264 10:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:56.264 10:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:56.524 10:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:59:56.524 10:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:59:56.783 true 00:59:56.783 10:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:56.783 10:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:57.043 10:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:57.302 10:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:59:57.302 10:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:59:57.302 true 00:59:57.302 10:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:57.302 10:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:57.561 10:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:57.820 10:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:59:57.820 10:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:59:58.079 true 00:59:58.079 10:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:58.079 10:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:58.338 10:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:58.338 10:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:59:58.338 10:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:59:58.597 true 00:59:58.597 10:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:58.597 10:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:58.857 10:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:59.116 10:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:59:59.116 10:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:59:59.116 true 00:59:59.376 10:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:59.376 10:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:59:59.376 10:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:59:59.635 10:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:59:59.635 10:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:59:59.895 true 00:59:59.895 10:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 00:59:59.895 10:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:00:00.154 10:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:00:00.154 
Initializing NVMe Controllers
01:00:00.154 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
01:00:00.154 Controller IO queue size 128, less than required.
01:00:00.154 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
01:00:00.154 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
01:00:00.154 Initialization complete. Launching workers.
01:00:00.154 ========================================================
01:00:00.154                                                                          Latency(us)
01:00:00.154 Device Information                                                     :       IOPS      MiB/s    Average        min        max
01:00:00.154 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   28584.08      13.96    4477.89    2562.52   23759.17
01:00:00.154 ========================================================
01:00:00.154 Total                                                                  :   28584.08      13.96    4477.89    2562.52   23759.17
01:00:00.154
01:00:00.154 10:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 01:00:00.154 10:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 01:00:00.414 true 01:00:00.414 10:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99585 01:00:00.414 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (99585) - No such process 01:00:00.414 10:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 99585 01:00:00.414 10:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:00:00.672 10:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:00:00.932 10:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 01:00:00.932 10:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 01:00:00.932 10:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 01:00:00.932 10:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:00:00.932 10:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 01:00:01.192 null0 01:00:01.192 10:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:00:01.192 10:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:00:01.192 10:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 01:00:01.192 null1 01:00:01.192 10:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:00:01.192 10:11:08
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:00:01.192 10:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 01:00:01.451 null2 01:00:01.451 10:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:00:01.451 10:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:00:01.451 10:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 01:00:01.711 null3 01:00:01.711 10:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:00:01.711 10:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:00:01.711 10:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 01:00:01.971 null4 01:00:01.971 10:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:00:01.971 10:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:00:01.971 10:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 01:00:01.971 null5 01:00:02.230 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:00:02.230 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:00:02.230 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 01:00:02.230 null6 01:00:02.230 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:00:02.230 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:00:02.230 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 01:00:02.490 null7 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 01:00:02.490 
10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
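For readers tracing the flow: the long run of iterations above, with null_size stepping from 1000 up to 1049, was produced by the main stress loop of ns_hotplug_stress.sh while the spdk_nvme_perf randread workload (PERF_PID=99585, launched with -t 30 -q 128 -w randread -o 512) was running. Reconstructed from the sh@44-50 xtrace entries, the loop is roughly the sketch below; the while framing and the rpc_py shorthand are assumptions pieced together from the trace, not the verbatim script. The bare "true" entries in the log are the responses printed by rpc.py for each bdev_null_resize call.

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # PERF_PID (99585) was captured at sh@42 when spdk_nvme_perf was backgrounded
    while kill -0 $PERF_PID; do                                        # sh@44: loop while perf is alive
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1  # sh@45: hot-remove namespace 1 under I/O
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46: hot-add it back
        null_size=$((null_size + 1))                                   # sh@49: 1001, 1002, ... 1049 in the log
        $rpc_py bdev_null_resize NULL1 $null_size                      # sh@50: grow NULL1 while it is attached
    done

Once the 30-second perf run ends, kill -0 fails (the "No such process" entry above), the script reaps the workload with wait (sh@53), and the remaining namespaces are removed (sh@54-55) before the multi-worker phase begins.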
01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:00:02.490 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 01:00:02.491 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:00:02.491 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 01:00:02.491 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:00:02.491 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:02.491 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
01:00:02.491 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:00:02.491 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:00:02.491 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 01:00:02.491 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:00:02.491 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 01:00:02.491 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:00:02.491 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:02.491 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:00:02.491 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 01:00:02.491 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:00:02.491 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 01:00:02.491 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:00:02.491 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 01:00:02.491 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:00:02.491 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:02.491 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:00:02.491 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
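The jumbled ordering of the sh@14, sh@16, and sh@17 entries around here is expected: eight add_remove workers run in the background concurrently, so their xtrace output interleaves. Pieced together from those entries, the worker looks approximately like the following; the function framing is an assumption, while the local declaration, the ten-round loop, and the two RPC calls are all visible in the trace.

    add_remove() {
        local nsid=$1 bdev=$2            # sh@14: e.g. nsid=1 bdev=null0
        for ((i = 0; i < 10; i++)); do   # sh@16: ten hotplug rounds per worker
            # sh@17-18: attach the null bdev as namespace $nsid, then detach it again
            $rpc_py nvmf_subsystem_add_ns -n $nsid nqn.2016-06.io.spdk:cnode1 $bdev
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $nsid
        done
    }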
01:00:02.491 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
01:00:02.491 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
01:00:02.491 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
01:00:02.491 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 101015 101017 101018 101020 101022 101024 101026 101028
01:00:02.491 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
01:00:02.491 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
01:00:02.491 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
01:00:02.491 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
01:00:02.751 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
01:00:02.751 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
01:00:02.751 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
01:00:02.751 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
01:00:02.751 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
01:00:02.751 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
01:00:02.751 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
01:00:02.751 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
01:00:03.011 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
01:00:03.011 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
01:00:03.011 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
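At this point all eight workers are running and line @66 blocks on them; the PIDs listed after wait are presumably the collected array after expansion, i.e. something like:

    wait "${pids[@]}"   # block until every add_remove worker finishes its ten iterations

From here on, the log is the eight loops racing each other, which is why add and remove calls for namespaces 1-8 hit nqn.2016-06.io.spdk:cnode1 in no fixed order; that unordered hot-plug churn is exactly the stress this test exercises.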
01:00:03.011 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:03.011 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:03.011 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:00:03.011 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:03.011 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:03.011 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:00:03.011 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:03.011 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:03.011 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:00:03.011 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:03.011 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:03.011 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:00:03.011 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:03.011 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:03.011 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:00:03.011 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:03.011 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:03.011 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:00:03.011 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:03.011 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:03.011 10:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:00:03.011 10:11:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:00:03.271 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:00:03.271 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:00:03.271 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:00:03.271 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:00:03.271 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:00:03.271 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:00:03.271 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:00:03.271 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:03.271 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:03.271 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:00:03.271 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:03.271 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:03.271 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:00:03.271 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:03.271 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:03.271 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:00:03.271 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:03.271 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:03.271 
10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:00:03.271 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:03.271 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:03.271 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:00:03.532 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:03.532 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:03.532 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:00:03.532 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:03.532 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:03.532 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:00:03.532 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:03.532 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:03.532 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:00:03.532 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:00:03.532 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:00:03.532 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:00:03.532 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:00:03.532 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:00:03.792 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 01:00:03.792 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:00:03.792 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:00:03.792 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:03.792 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:03.793 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:00:03.793 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:03.793 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:03.793 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:00:03.793 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:03.793 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:03.793 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:00:03.793 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:03.793 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:03.793 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:00:03.793 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:03.793 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:03.793 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:00:03.793 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:03.793 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:03.793 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:00:04.053 10:11:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:04.053 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:04.053 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:00:04.053 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:04.053 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:04.053 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:00:04.053 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:00:04.053 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:00:04.053 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:00:04.053 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:00:04.053 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:00:04.053 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:00:04.053 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:00:04.313 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:00:04.313 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:04.313 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:04.313 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:00:04.313 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:04.313 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:04.314 
10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:00:04.314 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:04.314 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:04.314 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:00:04.314 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:04.314 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:04.314 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:00:04.314 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:04.314 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:04.314 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:00:04.314 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:04.314 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:04.314 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:00:04.314 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:04.314 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:04.314 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:00:04.314 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:04.314 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:04.314 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:00:04.576 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:00:04.576 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:00:04.576 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:00:04.576 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:00:04.576 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:00:04.576 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:00:04.576 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:00:04.576 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:04.576 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:04.576 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:00:04.576 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:00:04.843 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:04.843 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:04.843 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:00:04.843 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:04.843 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:04.843 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:00:04.843 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:04.843 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:04.843 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:00:04.843 10:11:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:04.843 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:04.843 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:00:04.843 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:04.843 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:04.843 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:00:04.843 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:00:04.843 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:04.843 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:04.843 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:00:04.843 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:04.843 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:04.843 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:00:04.843 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:00:05.116 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:00:05.116 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:00:05.116 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:00:05.116 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:00:05.116 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:05.116 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:05.116 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:00:05.116 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:00:05.116 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:00:05.116 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:05.116 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:05.116 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:00:05.116 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:05.116 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:05.116 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:00:05.376 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:05.376 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:05.376 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:00:05.376 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:00:05.377 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:05.377 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:05.377 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:00:05.377 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:05.377 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:05.377 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:00:05.377 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:05.377 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:05.377 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:00:05.377 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:00:05.377 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:05.377 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:05.377 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:00:05.377 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:00:05.377 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:00:05.637 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:05.638 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:05.638 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:00:05.638 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:00:05.638 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:00:05.638 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:00:05.638 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:05.638 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:05.638 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:00:05.638 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:00:05.638 10:11:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:05.638 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:05.638 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:00:05.638 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:05.638 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:05.638 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:00:05.638 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:00:05.898 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:05.898 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:05.898 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:00:05.898 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:05.898 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:05.898 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:00:05.898 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:05.898 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:05.898 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:00:05.898 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:00:05.898 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:00:05.898 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:05.898 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:05.898 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:00:05.898 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:00:05.898 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:00:05.898 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:05.898 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:05.898 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:00:05.898 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:00:06.159 10:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:00:06.159 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:06.159 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:06.159 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:00:06.159 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:06.159 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:06.159 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:00:06.159 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:06.159 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:06.159 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:00:06.159 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:00:06.159 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:00:06.159 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:06.159 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:06.159 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:00:06.419 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:06.419 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:06.419 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:00:06.419 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:06.419 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:06.419 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:00:06.419 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:00:06.419 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:00:06.419 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:00:06.419 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:06.419 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:06.419 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:00:06.419 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:06.419 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:06.419 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:00:06.419 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:00:06.419 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:00:06.419 
10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:00:06.420 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:06.420 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:06.420 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:00:06.679 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:06.679 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:06.679 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:00:06.679 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:06.679 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:06.679 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:00:06.679 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:00:06.679 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:00:06.679 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:06.679 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:06.679 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:00:06.679 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:06.679 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:06.679 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:00:06.679 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:06.679 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:06.679 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:00:06.939 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:00:06.939 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:00:06.939 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:00:06.939 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:06.939 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:06.939 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:06.939 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:06.939 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:00:06.939 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:00:06.939 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:00:06.939 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:00:06.939 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:07.199 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:07.199 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:07.199 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:07.199 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:07.199 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:00:07.199 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:00:07.199 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:00:07.199 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 ))
01:00:07.199 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
01:00:07.199 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
01:00:07.199 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
01:00:07.199 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
01:00:07.459 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
01:00:07.459 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
01:00:07.459 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
01:00:07.459 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
01:00:07.459 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
01:00:07.459 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
01:00:07.459 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
01:00:07.459 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
01:00:07.459 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
01:00:07.460 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
01:00:07.460 rmmod nvme_tcp
01:00:07.460 rmmod nvme_fabrics
01:00:07.460 rmmod nvme_keyring
01:00:07.460 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
01:00:07.460 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
01:00:07.460 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
01:00:07.460 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 99454 ']'
01:00:07.460 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 99454
01:00:07.460 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 99454 ']'
01:00:07.460 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 99454
01:00:07.460 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
01:00:07.460 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:00:07.460 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99454
01:00:07.460 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
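nvmfcleanup (nvmf/common.sh@516) tears down the kernel initiator side before the target process is killed: it syncs, then repeatedly tries to unload the transport modules, since nvme-tcp can stay referenced for a moment after the last disconnect; the rmmod lines are modprobe's verbose output. A hedged sketch of the retry pattern the @124-@128 records suggest (the exit condition is an assumption, the log does not show it):

    set +e                  # unloading may fail while references drain
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    done
    set -e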
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:00:07.460 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99454' 01:00:07.460 killing process with pid 99454 01:00:07.460 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 99454 01:00:07.460 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 99454 01:00:07.720 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:00:07.720 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:00:07.720 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:00:07.720 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 01:00:07.720 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:00:07.720 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 01:00:07.720 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 01:00:07.720 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:00:07.720 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:00:07.720 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:00:07.720 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:00:07.720 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:00:07.720 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:00:07.720 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:00:07.980 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:00:07.980 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:00:07.980 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:00:07.980 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:00:07.980 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:00:07.980 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:00:07.980 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 01:00:07.980 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:00:07.980 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 01:00:07.980 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:00:07.980 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:00:07.980 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:00:07.980 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0 01:00:07.980 ************************************ 01:00:07.980 END TEST nvmf_ns_hotplug_stress 01:00:07.980 ************************************ 01:00:07.980 01:00:07.980 real 0m41.610s 01:00:07.980 user 3m4.439s 01:00:07.980 sys 0m16.433s 01:00:07.980 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:07.980 10:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:00:08.240 ************************************ 01:00:08.240 START TEST nvmf_delete_subsystem 01:00:08.240 ************************************ 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 01:00:08.240 * Looking for test storage... 
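
The churn traced above (ns_hotplug_stress.sh@16-@18) is the whole test: a bounded loop that hot-adds null bdevs as namespaces and hot-removes namespaces by NSID over the RPC socket while initiators stay connected, followed by the usual nvmftestfini teardown. A minimal sketch of that loop, reconstructed from the trace (the NSID and bdev picks in the real test/nvmf/target/ns_hotplug_stress.sh are randomized, so treat the $RANDOM choices here as an assumption):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for (( i = 0; i < 10; ++i )); do
        # hot-add one of the pre-created null bdevs under a random NSID ...
        "$rpc" nvmf_subsystem_add_ns -n $(( RANDOM % 10 + 1 )) "$nqn" "null$(( RANDOM % 5 ))" || true
        # ... and hot-remove a random NSID; either call may fail when the NSID
        # is already (or not yet) attached, which is exactly the stress point
        "$rpc" nvmf_subsystem_remove_ns "$nqn" $(( RANDOM % 10 + 1 )) || true
    done
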
01:00:08.240 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:00:08.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:08.240 --rc genhtml_branch_coverage=1 01:00:08.240 --rc genhtml_function_coverage=1 01:00:08.240 --rc genhtml_legend=1 01:00:08.240 --rc geninfo_all_blocks=1 01:00:08.240 --rc geninfo_unexecuted_blocks=1 01:00:08.240 01:00:08.240 ' 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:00:08.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:08.240 --rc genhtml_branch_coverage=1 01:00:08.240 --rc genhtml_function_coverage=1 01:00:08.240 --rc genhtml_legend=1 01:00:08.240 --rc geninfo_all_blocks=1 01:00:08.240 --rc geninfo_unexecuted_blocks=1 01:00:08.240 01:00:08.240 ' 01:00:08.240 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:00:08.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:08.241 --rc genhtml_branch_coverage=1 01:00:08.241 --rc genhtml_function_coverage=1 01:00:08.241 --rc genhtml_legend=1 01:00:08.241 --rc geninfo_all_blocks=1 01:00:08.241 --rc geninfo_unexecuted_blocks=1 01:00:08.241 01:00:08.241 ' 01:00:08.241 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:00:08.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:08.241 --rc genhtml_branch_coverage=1 01:00:08.241 --rc genhtml_function_coverage=1 01:00:08.241 --rc 
genhtml_legend=1 01:00:08.241 --rc geninfo_all_blocks=1 01:00:08.241 --rc geninfo_unexecuted_blocks=1 01:00:08.241 01:00:08.241 ' 01:00:08.241 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:00:08.241 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 01:00:08.241 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:00:08.241 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:00:08.241 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:00:08.241 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:00:08.241 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:00:08.241 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:00:08.241 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:00:08.241 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:00:08.241 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:00:08.241 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[…same three toolchain dirs repeated by repeated re-sourcing of export.sh…]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[…same repeated toolchain dirs…]:/usr/local/bin:[…]:/var/lib/snapd/snap/bin
01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[…same repeated toolchain dirs…]:/usr/local/bin:[…]:/var/lib/snapd/snap/bin
01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[…same repeated toolchain dirs…]:/usr/local/bin:[…]:/var/lib/snapd/snap/bin
01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0
01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:00:08.500 10:11:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:00:08.500 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:00:08.501 10:11:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:00:08.501 Cannot find device "nvmf_init_br" 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:00:08.501 Cannot find device "nvmf_init_br2" 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:00:08.501 Cannot find device "nvmf_tgt_br" 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:00:08.501 Cannot find device "nvmf_tgt_br2" 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:00:08.501 Cannot find device "nvmf_init_br" 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:00:08.501 Cannot find device "nvmf_init_br2" 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:00:08.501 Cannot find device "nvmf_tgt_br" 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:00:08.501 Cannot find device "nvmf_tgt_br2" 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:00:08.501 Cannot find device "nvmf_br" 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:00:08.501 Cannot find device "nvmf_init_if" 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:00:08.501 Cannot find device "nvmf_init_if2" 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:00:08.501 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:00:08.501 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:00:08.501 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:00:08.760 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:00:08.760 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:00:08.760 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:00:08.760 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:00:08.761 10:11:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:00:08.761 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:00:08.761 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.119 ms 01:00:08.761 01:00:08.761 --- 10.0.0.3 ping statistics --- 01:00:08.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:00:08.761 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:00:08.761 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:00:08.761 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.111 ms 01:00:08.761 01:00:08.761 --- 10.0.0.4 ping statistics --- 01:00:08.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:00:08.761 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:00:08.761 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:00:08.761 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 01:00:08.761 01:00:08.761 --- 10.0.0.1 ping statistics --- 01:00:08.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:00:08.761 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:00:08.761 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:00:08.761 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 01:00:08.761 01:00:08.761 --- 10.0.0.2 ping statistics --- 01:00:08.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:00:08.761 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=102434 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 102434 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 102434 ']' 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 01:00:08.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
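
Stripped of the per-command tracing, nvmf/common.sh@177-@225 above built and verified one fixed topology: the target runs inside the nvmf_tgt_ns_spdk namespace, each side gets veth pairs, the host-side peers are enslaved to the nvmf_br bridge, iptables accepts TCP/4420, and the four pings prove the initiator addresses (10.0.0.1/.2) and target addresses (10.0.0.3/.4) reach each other. The same topology condensed to one pair per side, using exactly the commands traced:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                       # bridge stitches the two sides together
    ip link set nvmf_tgt_br master nvmf_br
    # bring every link up, allow TCP/4420 in, then verify: ping -c 1 10.0.0.3
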
01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 01:00:08.761 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:00:09.021 [2024-12-09 10:11:15.825281] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:00:09.021 [2024-12-09 10:11:15.826108] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 01:00:09.022 [2024-12-09 10:11:15.826156] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:00:09.022 [2024-12-09 10:11:15.977039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:00:09.022 [2024-12-09 10:11:16.024147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:00:09.022 [2024-12-09 10:11:16.024370] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:00:09.022 [2024-12-09 10:11:16.024418] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:00:09.022 [2024-12-09 10:11:16.024454] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:00:09.022 [2024-12-09 10:11:16.024513] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:00:09.022 [2024-12-09 10:11:16.025414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:09.022 [2024-12-09 10:11:16.025414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:00:09.280 [2024-12-09 10:11:16.096510] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:00:09.280 [2024-12-09 10:11:16.097827] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 01:00:09.280 [2024-12-09 10:11:16.097888] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
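
Those notices are nvmf_tgt coming up inside the namespace with --interrupt-mode on a two-core mask: DPDK initializes, reactors start on cores 0 and 1, and each spdk_thread is switched to interrupt mode before the transport exists. The launch-and-wait pattern behind nvmfappstart/waitforlisten is roughly the following sketch; the real waitforlisten in common/autotest_common.sh is more careful (retry budget, pid-liveness checks), and polling rpc_get_methods here is this sketch's own choice of readiness probe:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    # poll until the app answers on its RPC socket
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
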
01:00:09.851 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:00:09.851 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 01:00:09.851 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:00:09.851 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 01:00:09.851 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:00:09.851 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:00:09.851 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:00:09.851 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:09.851 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:00:09.851 [2024-12-09 10:11:16.762654] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:00:09.851 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:09.851 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 01:00:09.852 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:09.852 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:00:09.852 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:09.852 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:00:09.852 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:09.852 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:00:09.852 [2024-12-09 10:11:16.795047] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:00:09.852 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:09.852 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 01:00:09.852 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:09.852 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:00:09.852 NULL1 01:00:09.852 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:09.852 10:11:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 01:00:09.852 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:09.852 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:00:09.852 Delay0 01:00:09.852 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:09.852 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:00:09.852 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:09.852 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:00:09.852 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:09.852 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=102485 01:00:09.852 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 01:00:09.852 10:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 01:00:10.111 [2024-12-09 10:11:17.019996] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
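
With that deprecation warning logged, the fixture is complete. Collapsing the rpc_cmd calls traced above (delete_subsystem.sh@15-@30), the setup is: a TCP transport, a subsystem capped at 10 namespaces, a listener on 10.0.0.3:4420, a null bdev wrapped in a delay bdev so every I/O sits in flight for about a second, and a 5-second spdk_nvme_perf run at queue depth 128 started 2 seconds before the deletion:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc.py bdev_null_create NULL1 1000 512              # backing bdev: 1000 MB, 512 B blocks
    rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000     # ~1,000,000 us latency on every op
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &       # perf_pid=102485 in this run
    sleep 2                                             # guarantee I/O is outstanding at delete time
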
01:00:12.021 10:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
01:00:12.021 10:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
01:00:12.021 10:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
01:00:12.021 Read completed with error (sct=0, sc=8)
01:00:12.021 starting I/O failed: -6
01:00:12.021 Write completed with error (sct=0, sc=8)
[many further interleaved "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines trimmed]
01:00:12.021 [2024-12-09 10:11:19.051746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f67e0 is same with the state(6) to be set
[further Read/Write completion lines trimmed]
01:00:12.021 [2024-12-09 10:11:19.052599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f5c30 is same with the state(6) to be set
[further completion and "starting I/O failed: -6" lines trimmed]
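
This wall of completions is the expected outcome, not a malfunction: nvmf_delete_subsystem lands while perf still has up to 128 commands per qpair parked behind Delay0's one-second latency. Destroying the subsystem tears its queue pairs down, so each outstanding command completes with generic status (sct=0, sc=8), the NVMe base spec's Command Aborted due to SQ Deletion, and new submissions from perf fail immediately with -6 (likely -ENXIO, the qpair no longer being usable). The nvme_tcp_qpair_set_recv_state errors are the initiator-side TCP transport observing each qpair's receive state being reset during that teardown.
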
[further "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines trimmed]
01:00:12.022 [2024-12-09 10:11:19.053141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f838800d4d0 is same with the state(6) to be set
[further Read/Write completion lines trimmed]
01:00:13.403 [2024-12-09 10:11:20.033879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eaaa0 is same with the state(6) to be set
[further Read/Write completion lines trimmed]
01:00:13.403 [2024-12-09 10:11:20.049965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f838800d800 is same with the state(6) to be set
[further Read/Write completion lines trimmed]
01:00:13.404 [2024-12-09 10:11:20.050145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f838800d020 is same with the state(6) to be set
[further Read/Write completion lines trimmed]
01:00:13.404 [2024-12-09 10:11:20.051045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f5a50 is same with the state(6) to be set
01:00:13.404 Read completed with error (sct=0, sc=8)
01:00:13.404 Read completed with error (sct=0, sc=8)
01:00:13.404 Write completed with error (sct=0, sc=8)
01:00:13.404 Read completed with
error (sct=0, sc=8) 01:00:13.404 Read completed with error (sct=0, sc=8) 01:00:13.404 Read completed with error (sct=0, sc=8) 01:00:13.404 Write completed with error (sct=0, sc=8) 01:00:13.404 Write completed with error (sct=0, sc=8) 01:00:13.404 Read completed with error (sct=0, sc=8) 01:00:13.404 Write completed with error (sct=0, sc=8) 01:00:13.404 Read completed with error (sct=0, sc=8) 01:00:13.404 Read completed with error (sct=0, sc=8) 01:00:13.404 Read completed with error (sct=0, sc=8) 01:00:13.404 Read completed with error (sct=0, sc=8) 01:00:13.404 Write completed with error (sct=0, sc=8) 01:00:13.404 Write completed with error (sct=0, sc=8) 01:00:13.404 Read completed with error (sct=0, sc=8) 01:00:13.404 Write completed with error (sct=0, sc=8) 01:00:13.404 Read completed with error (sct=0, sc=8) 01:00:13.404 Write completed with error (sct=0, sc=8) 01:00:13.404 Read completed with error (sct=0, sc=8) 01:00:13.404 Read completed with error (sct=0, sc=8) 01:00:13.404 Read completed with error (sct=0, sc=8) 01:00:13.404 Write completed with error (sct=0, sc=8) 01:00:13.404 Write completed with error (sct=0, sc=8) 01:00:13.404 Read completed with error (sct=0, sc=8) 01:00:13.404 [2024-12-09 10:11:20.051205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f8ea0 is same with the state(6) to be set 01:00:13.404 Initializing NVMe Controllers 01:00:13.404 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:00:13.404 Controller IO queue size 128, less than required. 01:00:13.404 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:00:13.404 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 01:00:13.404 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 01:00:13.404 Initialization complete. Launching workers. 
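The storm of completions condensed above is the point of this test: delete_subsystem tears the subsystem down underneath a running spdk_nvme_perf job, so every in-flight command is failed back with status code type 0 (generic command status) and status code 0x8, "Command Aborted due to SQ Deletion", while fresh submissions fail with -6 (most likely -ENXIO from the now-disabled qpair). A minimal bash sketch of decoding the pair; the helper is illustrative, not part of the harness:

    # Hypothetical helper: map the (sct, sc) pair printed above to the
    # NVMe base-spec names for generic command status (sct=0).
    decode_nvme_status() {
        local sct=$1 sc=$2
        case "$sct,$sc" in
            0,0) echo "Successful Completion" ;;
            0,8) echo "Command Aborted due to SQ Deletion" ;;
            *)   echo "sct=$sct sc=$sc (see the NVMe base spec status code tables)" ;;
        esac
    }
    decode_nvme_status 0 8   # -> Command Aborted due to SQ Deletion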
01:00:13.404 ========================================================
01:00:13.404 Latency(us)
01:00:13.404 Device Information                                                        :       IOPS      MiB/s    Average        min        max
01:00:13.404 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:     167.14       0.08  923186.06     863.43 2002149.93
01:00:13.404 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:     157.72       0.08 1025060.94     270.07 2002112.82
01:00:13.404 ========================================================
01:00:13.404 Total                                                                      :     324.87       0.16  972645.93     270.07 2002149.93
01:00:13.404
01:00:13.404 [2024-12-09 10:11:20.051633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23eaaa0 (9): Bad file descriptor
01:00:13.404 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred
01:00:13.404 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:00:13.404 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
01:00:13.404 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 102485
01:00:13.404 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
01:00:13.664 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
01:00:13.664 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 102485
01:00:13.664 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (102485) - No such process
01:00:13.664 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 102485
01:00:13.664 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
01:00:13.664 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 102485
01:00:13.664 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
01:00:13.664 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
01:00:13.664 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
01:00:13.664 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
01:00:13.664 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 102485
01:00:13.664 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
01:00:13.664 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
01:00:13.664 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
01:00:13.664 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
01:00:13.664 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 01:00:13.664 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:13.664 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:00:13.664 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:13.664 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:00:13.664 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:13.664 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:00:13.664 [2024-12-09 10:11:20.587159] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:00:13.664 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:13.664 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:00:13.664 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:13.664 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:00:13.664 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:13.664 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=102532 01:00:13.664 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 01:00:13.664 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 01:00:13.664 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 102532 01:00:13.664 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 01:00:13.925 [2024-12-09 10:11:20.782638] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
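The rpc_cmd calls above rebuild the subsystem the test just deleted, so a second perf run can be interrupted the same way. Outside the rpc_cmd wrapper, the same three-step rebuild can be issued with scripts/rpc.py (a sketch; the RPC socket is assumed to be the default /var/tmp/spdk.sock):

    # Recreate subsystem, TCP listener, and namespace (flags as traced above).
    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10        # allow any host, serial number, max 10 namespaces
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420            # listener on the in-namespace target address
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # Delay0 bdev becomes NSID 1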
01:00:14.185 10:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
01:00:14.185 10:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 102532
01:00:14.185 10:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
[... the same (( delay++ > 20 )) / kill -0 102532 / sleep 0.5 probe repeats at 01:00:14.754, 01:00:15.324, 01:00:15.583, 01:00:16.153 and 01:00:16.723 while perf is still running; condensed ...]
01:00:16.982 Initializing NVMe Controllers
01:00:16.982 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
01:00:16.982 Controller IO queue size 128, less than required.
01:00:16.982 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
01:00:16.982 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
01:00:16.982 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
01:00:16.982 Initialization complete. Launching workers.
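The probes at 01:00:14.185 through 01:00:16.723 condensed above are delete_subsystem.sh waiting for the perf job to exit once its subsystem disappears. The loop has roughly this shape (a sketch of lines 56-60 of the script, not a verbatim copy):

    # Poll the backgrounded spdk_nvme_perf until subsystem deletion kills it.
    # kill -0 sends no signal; it only tests that the pid is still alive.
    perf_pid=$!   # pid captured when perf was launched; 102532 in this run
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && exit 1   # give up after ~10 s of 0.5 s naps
        sleep 0.5
    done
    wait "$perf_pid" || true           # reap it; a non-zero status is expected here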
01:00:16.982 ========================================================
01:00:16.982 Latency(us)
01:00:16.982 Device Information                                                        :       IOPS      MiB/s    Average        min        max
01:00:16.982 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:     128.00       0.06 1003519.57 1000196.50 1015153.72
01:00:16.982 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:     128.00       0.06 1005163.65 1000187.62 1016971.59
01:00:16.982 ========================================================
01:00:16.982 Total                                                                      :     256.00       0.12 1004341.61 1000187.62 1016971.59
01:00:16.982
01:00:17.242 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
01:00:17.242 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 102532
01:00:17.242 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (102532) - No such process
01:00:17.242 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 102532
01:00:17.242 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
01:00:17.242 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
01:00:17.242 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
01:00:17.242 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
01:00:17.242 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
01:00:17.242 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
01:00:17.242 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
01:00:17.242 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
01:00:17.242 rmmod nvme_tcp
01:00:17.242 rmmod nvme_fabrics
01:00:17.242 rmmod nvme_keyring
01:00:17.242 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
01:00:17.242 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
01:00:17.242 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
01:00:17.242 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 102434 ']'
01:00:17.242 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 102434
01:00:17.242 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 102434 ']'
01:00:17.242 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 102434
01:00:17.242 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
01:00:17.242 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:00:17.242 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 --
# ps --no-headers -o comm= 102434 01:00:17.502 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:00:17.502 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:00:17.502 killing process with pid 102434 01:00:17.502 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102434' 01:00:17.502 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 102434 01:00:17.502 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 102434 01:00:17.502 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:00:17.502 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:00:17.502 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:00:17.502 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 01:00:17.502 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:00:17.502 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 01:00:17.502 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 01:00:17.502 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:00:17.502 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:00:17.502 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:00:17.502 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:00:17.502 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:00:17.502 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:00:17.502 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:00:17.761 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:00:17.761 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:00:17.761 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:00:17.761 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:00:17.761 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:00:17.761 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link 
delete nvmf_init_if2 01:00:17.761 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:00:17.761 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:00:17.761 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 01:00:17.761 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:00:17.761 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:00:17.761 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:00:17.761 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 01:00:17.761 01:00:17.761 real 0m9.700s 01:00:17.761 user 0m25.097s 01:00:17.761 sys 0m1.741s 01:00:17.761 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:17.761 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:00:17.762 ************************************ 01:00:17.762 END TEST nvmf_delete_subsystem 01:00:17.762 ************************************ 01:00:17.762 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 01:00:17.762 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:00:17.762 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:17.762 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:00:18.024 ************************************ 01:00:18.024 START TEST nvmf_host_management 01:00:18.024 ************************************ 01:00:18.024 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 01:00:18.024 * Looking for test storage... 
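Before the run moves into nvmf_host_management, note the killprocess guard that just executed in the teardown above: it refuses to kill blindly and first checks that the pid still names the expected process. Condensed (simplified from the autotest_common.sh trace, not a verbatim copy):

    # Simplified killprocess: verify the pid before signalling it.
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0        # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")       # reactor_0 for an SPDK target
        [[ $name == sudo ]] && return 1               # never kill the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }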
01:00:18.024 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:00:18.024 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:00:18.024 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 01:00:18.024 10:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:00:18.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:18.024 --rc genhtml_branch_coverage=1 01:00:18.024 --rc genhtml_function_coverage=1 01:00:18.024 --rc genhtml_legend=1 01:00:18.024 --rc geninfo_all_blocks=1 01:00:18.024 --rc geninfo_unexecuted_blocks=1 01:00:18.024 01:00:18.024 ' 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:00:18.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:18.024 --rc genhtml_branch_coverage=1 01:00:18.024 --rc genhtml_function_coverage=1 01:00:18.024 --rc genhtml_legend=1 01:00:18.024 --rc geninfo_all_blocks=1 01:00:18.024 --rc geninfo_unexecuted_blocks=1 01:00:18.024 01:00:18.024 ' 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:00:18.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:18.024 --rc genhtml_branch_coverage=1 01:00:18.024 --rc genhtml_function_coverage=1 01:00:18.024 --rc genhtml_legend=1 01:00:18.024 --rc geninfo_all_blocks=1 01:00:18.024 --rc geninfo_unexecuted_blocks=1 01:00:18.024 01:00:18.024 ' 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:00:18.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:18.024 --rc genhtml_branch_coverage=1 01:00:18.024 --rc genhtml_function_coverage=1 01:00:18.024 --rc genhtml_legend=1 
01:00:18.024 --rc geninfo_all_blocks=1 01:00:18.024 --rc geninfo_unexecuted_blocks=1 01:00:18.024 01:00:18.024 ' 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:00:18.024 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:00:18.025 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:00:18.025 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:00:18.025 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:00:18.025 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:00:18.025 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:00:18.025 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:00:18.025 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:00:18.025 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:00:18.025 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 01:00:18.025 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 01:00:18.025 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:00:18.025 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:00:18.025 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:00:18.025 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:00:18.025 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:00:18.025 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 01:00:18.025 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:00:18.025 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:00:18.025 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:00:18.025 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:18.025 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:18.025 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:18.025 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 01:00:18.025 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:18.025 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 01:00:18.025 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:00:18.025 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:00:18.025 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:00:18.025 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:00:18.025 10:11:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:00:18.025 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:00:18.025 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:00:18.025 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:00:18.025 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:00:18.025 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:00:18.303 10:11:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:00:18.303 Cannot find device "nvmf_init_br" 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # true 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:00:18.303 Cannot find device "nvmf_init_br2" 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # true 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:00:18.303 Cannot find device "nvmf_tgt_br" 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # true 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:00:18.303 Cannot find device "nvmf_tgt_br2" 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # true 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:00:18.303 Cannot find device "nvmf_init_br" 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # true 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 
down 01:00:18.303 Cannot find device "nvmf_init_br2" 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # true 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:00:18.303 Cannot find device "nvmf_tgt_br" 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # true 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:00:18.303 Cannot find device "nvmf_tgt_br2" 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # true 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:00:18.303 Cannot find device "nvmf_br" 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # true 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:00:18.303 Cannot find device "nvmf_init_if" 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # true 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:00:18.303 Cannot find device "nvmf_init_if2" 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # true 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:00:18.303 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # true 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:00:18.303 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # true 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:00:18.303 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:00:18.568 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns 
nvmf_tgt_ns_spdk 01:00:18.568 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:00:18.568 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:00:18.568 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:00:18.568 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:00:18.568 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:00:18.568 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:00:18.568 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:00:18.568 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:00:18.568 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:00:18.568 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:00:18.568 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:00:18.568 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:00:18.568 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:00:18.568 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:00:18.568 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:00:18.568 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:00:18.568 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:00:18.568 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:00:18.568 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:00:18.568 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:00:18.568 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:00:18.568 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:00:18.568 10:11:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:00:18.568 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:00:18.568 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:00:18.568 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:00:18.568 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:00:18.568 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.116 ms 01:00:18.568 01:00:18.568 --- 10.0.0.3 ping statistics --- 01:00:18.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:00:18.568 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 01:00:18.568 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:00:18.568 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:00:18.568 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.090 ms 01:00:18.568 01:00:18.568 --- 10.0.0.4 ping statistics --- 01:00:18.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:00:18.569 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 01:00:18.569 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:00:18.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:00:18.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 01:00:18.569 01:00:18.569 --- 10.0.0.1 ping statistics --- 01:00:18.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:00:18.569 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 01:00:18.569 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:00:18.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:00:18.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 01:00:18.569 01:00:18.569 --- 10.0.0.2 ping statistics --- 01:00:18.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:00:18.569 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 01:00:18.569 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:00:18.569 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 01:00:18.569 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:00:18.569 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:00:18.569 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:00:18.569 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:00:18.569 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:00:18.569 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:00:18.569 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:00:18.569 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 01:00:18.569 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 01:00:18.569 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 01:00:18.569 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:00:18.569 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 01:00:18.569 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:00:18.569 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=102813 01:00:18.569 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 01:00:18.569 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 102813 01:00:18.569 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 102813 ']' 01:00:18.569 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:00:18.569 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 01:00:18.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:00:18.569 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
01:00:18.569 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 01:00:18.569 10:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:00:18.569 [2024-12-09 10:11:25.604729] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:00:18.569 [2024-12-09 10:11:25.605464] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 01:00:18.569 [2024-12-09 10:11:25.605536] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:00:18.828 [2024-12-09 10:11:25.753639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:00:18.828 [2024-12-09 10:11:25.797104] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:00:18.828 [2024-12-09 10:11:25.797155] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:00:18.828 [2024-12-09 10:11:25.797161] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:00:18.828 [2024-12-09 10:11:25.797165] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:00:18.829 [2024-12-09 10:11:25.797169] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:00:18.829 [2024-12-09 10:11:25.798044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:00:18.829 [2024-12-09 10:11:25.798147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:00:18.829 [2024-12-09 10:11:25.798334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:00:18.829 [2024-12-09 10:11:25.798338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 01:00:18.829 [2024-12-09 10:11:25.869342] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 01:00:18.829 [2024-12-09 10:11:25.869796] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:00:18.829 [2024-12-09 10:11:25.870659] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 01:00:18.829 [2024-12-09 10:11:25.871000] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 01:00:18.829 [2024-12-09 10:11:25.871106] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
01:00:19.767 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
01:00:19.767 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
01:00:19.767 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
01:00:19.767 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable
01:00:19.767 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
01:00:19.767 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
01:00:19.767 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
01:00:19.767 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
01:00:19.767 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
01:00:19.767 [2024-12-09 10:11:26.552791] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
01:00:19.767 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:00:19.767 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystems
01:00:19.767 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
01:00:19.767 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
01:00:19.767 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt
01:00:19.767 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat
01:00:19.767 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd
01:00:19.767 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
01:00:19.767 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
01:00:19.768 Malloc0
01:00:19.768 [2024-12-09 10:11:26.651719] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
01:00:19.768 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:00:19.768 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems
01:00:19.768 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable
01:00:19.768 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
01:00:19.768 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=102886
01:00:19.768 10:11:26
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 01:00:19.768 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 102886 /var/tmp/bdevperf.sock 01:00:19.768 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 01:00:19.768 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 102886 ']' 01:00:19.768 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:00:19.768 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 01:00:19.768 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 01:00:19.768 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 01:00:19.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:00:19.768 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:00:19.768 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:00:19.768 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 01:00:19.768 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:00:19.768 { 01:00:19.768 "params": { 01:00:19.768 "name": "Nvme$subsystem", 01:00:19.768 "trtype": "$TEST_TRANSPORT", 01:00:19.768 "traddr": "$NVMF_FIRST_TARGET_IP", 01:00:19.768 "adrfam": "ipv4", 01:00:19.768 "trsvcid": "$NVMF_PORT", 01:00:19.768 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:00:19.768 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:00:19.768 "hdgst": ${hdgst:-false}, 01:00:19.768 "ddgst": ${ddgst:-false} 01:00:19.768 }, 01:00:19.768 "method": "bdev_nvme_attach_controller" 01:00:19.768 } 01:00:19.768 EOF 01:00:19.768 )") 01:00:19.768 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:00:19.768 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 01:00:19.768 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
01:00:19.768 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 01:00:19.768 10:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:00:19.768 "params": { 01:00:19.768 "name": "Nvme0", 01:00:19.768 "trtype": "tcp", 01:00:19.768 "traddr": "10.0.0.3", 01:00:19.768 "adrfam": "ipv4", 01:00:19.768 "trsvcid": "4420", 01:00:19.768 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:00:19.768 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:00:19.768 "hdgst": false, 01:00:19.768 "ddgst": false 01:00:19.768 }, 01:00:19.768 "method": "bdev_nvme_attach_controller" 01:00:19.768 }' 01:00:19.768 [2024-12-09 10:11:26.761597] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 01:00:19.768 [2024-12-09 10:11:26.761681] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102886 ] 01:00:20.028 [2024-12-09 10:11:26.915556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:20.028 [2024-12-09 10:11:26.959030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:20.287 Running I/O for 10 seconds... 01:00:20.858 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:00:20.858 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 01:00:20.858 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 01:00:20.858 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:20.858 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:00:20.858 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:20.858 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:00:20.858 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 01:00:20.858 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 01:00:20.858 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 01:00:20.858 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 01:00:20.858 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 01:00:20.858 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 01:00:20.858 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 01:00:20.858 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
01:00:20.858 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
01:00:20.858 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
01:00:20.858 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
01:00:20.858 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:00:20.858 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1262
01:00:20.858 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1262 -ge 100 ']'
01:00:20.858 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
01:00:20.858 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break
01:00:20.858 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0
01:00:20.858 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
01:00:20.858 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
01:00:20.858 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
01:00:20.858 task offset: 46336 on job bdev=Nvme0n1 fails
01:00:20.858
01:00:20.858 Latency(us)
01:00:20.858 [2024-12-09T10:11:27.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:00:20.858 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
01:00:20.858 Job: Nvme0n1 ended in about 0.62 seconds with error
01:00:20.858 Verification LBA range: start 0x0 length 0x400
01:00:20.858 Nvme0n1 : 0.62 2179.32 136.21 103.78 0.00 27460.31 1509.62 25642.03
01:00:20.858 [2024-12-09T10:11:27.902Z] ===================================================================================================================
01:00:20.858 [2024-12-09T10:11:27.902Z] Total : 2179.32 136.21 103.78 0.00 27460.31 1509.62 25642.03
01:00:20.858 [2024-12-09 10:11:27.727557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:46336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:00:20.858 [2024-12-09 10:11:27.727593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:00:20.858 [2024-12-09 10:11:27.727608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:46464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:00:20.858 [2024-12-09 10:11:27.727615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:00:20.858 [2024-12-09 10:11:27.727623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:46592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:00:20.858 [2024-12-09 10:11:27.727629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:0 01:00:20.858 [2024-12-09 10:11:27.727636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:46720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.858 [2024-12-09 10:11:27.727641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.858 [2024-12-09 10:11:27.727649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:46848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.858 [2024-12-09 10:11:27.727654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.858 [2024-12-09 10:11:27.727661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:46976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.858 [2024-12-09 10:11:27.727666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.858 [2024-12-09 10:11:27.727673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:47104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.858 [2024-12-09 10:11:27.727678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.858 [2024-12-09 10:11:27.727685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:47232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.858 [2024-12-09 10:11:27.727690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.858 [2024-12-09 10:11:27.727697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:47360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.858 [2024-12-09 10:11:27.727702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.858 [2024-12-09 10:11:27.727709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:47488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.858 [2024-12-09 10:11:27.727714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.858 [2024-12-09 10:11:27.727721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:47616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.858 [2024-12-09 10:11:27.727726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.858 [2024-12-09 10:11:27.727733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:47744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.858 [2024-12-09 10:11:27.727738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.858 [2024-12-09 10:11:27.727745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:47872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.858 [2024-12-09 10:11:27.727754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
01:00:20.858 [2024-12-09 10:11:27.727762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:48000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.858 [2024-12-09 10:11:27.727768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.858 [2024-12-09 10:11:27.727775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:48128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.858 [2024-12-09 10:11:27.727780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.858 [2024-12-09 10:11:27.727787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:48256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.858 [2024-12-09 10:11:27.727792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.858 [2024-12-09 10:11:27.727799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:48384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.858 [2024-12-09 10:11:27.727804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.858 [2024-12-09 10:11:27.727813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:48512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.727818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.727825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:48640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.727831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.727837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:48768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.727843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.727850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:48896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.727855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.727862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:49024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.727868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.727875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.727880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 
[2024-12-09 10:11:27.727888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.727894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.727901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.727906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.727913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.727919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.727925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.727931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.727938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.727943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.727949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.727954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.727961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.727966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.727973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.727979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.727986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.727991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.727998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.728003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.728011] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.728016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.728023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.728028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.728035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.728040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.728047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.728052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.728059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.728069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.728076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.728081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.728088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.728093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.728101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.728106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.728113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.728119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.728125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.728131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.728137] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.728143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.728149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.728155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.728162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.728168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.728174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.728179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.728186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.728191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.728198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.728203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.728210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.728216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.728223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.728228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.728235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.728240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.728247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.728252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.728258] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.728265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.728272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.728277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.728284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.728289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.728296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.859 [2024-12-09 10:11:27.728301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.859 [2024-12-09 10:11:27.728308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:45440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.860 [2024-12-09 10:11:27.728313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.860 [2024-12-09 10:11:27.728320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:45568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.860 [2024-12-09 10:11:27.728325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.860 [2024-12-09 10:11:27.728332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:45696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.860 [2024-12-09 10:11:27.728337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.860 [2024-12-09 10:11:27.728344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:45824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.860 [2024-12-09 10:11:27.728350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.860 [2024-12-09 10:11:27.728356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.860 [2024-12-09 10:11:27.728362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.860 [2024-12-09 10:11:27.728369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:46080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.860 [2024-12-09 10:11:27.728374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.860 [2024-12-09 10:11:27.728381] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:46208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:00:20.860 [2024-12-09 10:11:27.728387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.860 [2024-12-09 10:11:27.729379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:00:20.860 [2024-12-09 10:11:27.731150] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:00:20.860 [2024-12-09 10:11:27.731175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dfe130 (9): Bad file descriptor 01:00:20.860 [2024-12-09 10:11:27.731990] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 01:00:20.860 [2024-12-09 10:11:27.732063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 01:00:20.860 [2024-12-09 10:11:27.732090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:00:20.860 [2024-12-09 10:11:27.732103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 01:00:20.860 [2024-12-09 10:11:27.732110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 01:00:20.860 [2024-12-09 10:11:27.732116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:00:20.860 [2024-12-09 10:11:27.732122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfe130 01:00:20.860 [2024-12-09 10:11:27.732147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dfe130 (9): Bad file descriptor 01:00:20.860 [2024-12-09 10:11:27.732159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:00:20.860 [2024-12-09 10:11:27.732167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:00:20.860 [2024-12-09 10:11:27.732175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:00:20.860 [2024-12-09 10:11:27.732183] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
01:00:20.860 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:20.860 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 01:00:20.860 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:20.860 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:00:20.860 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:20.860 10:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 01:00:21.799 10:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 102886 01:00:21.799 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (102886) - No such process 01:00:21.799 10:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 01:00:21.799 10:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 01:00:21.799 10:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 01:00:21.799 10:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 01:00:21.799 10:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 01:00:21.799 10:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 01:00:21.799 10:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:00:21.799 10:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:00:21.799 { 01:00:21.799 "params": { 01:00:21.799 "name": "Nvme$subsystem", 01:00:21.799 "trtype": "$TEST_TRANSPORT", 01:00:21.799 "traddr": "$NVMF_FIRST_TARGET_IP", 01:00:21.799 "adrfam": "ipv4", 01:00:21.799 "trsvcid": "$NVMF_PORT", 01:00:21.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:00:21.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:00:21.799 "hdgst": ${hdgst:-false}, 01:00:21.799 "ddgst": ${ddgst:-false} 01:00:21.799 }, 01:00:21.799 "method": "bdev_nvme_attach_controller" 01:00:21.799 } 01:00:21.799 EOF 01:00:21.799 )") 01:00:21.799 10:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 01:00:21.799 10:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
01:00:21.799 10:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
01:00:21.799 10:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
01:00:21.799 "params": {
01:00:21.799 "name": "Nvme0",
01:00:21.799 "trtype": "tcp",
01:00:21.799 "traddr": "10.0.0.3",
01:00:21.799 "adrfam": "ipv4",
01:00:21.799 "trsvcid": "4420",
01:00:21.799 "subnqn": "nqn.2016-06.io.spdk:cnode0",
01:00:21.799 "hostnqn": "nqn.2016-06.io.spdk:host0",
01:00:21.799 "hdgst": false,
01:00:21.799 "ddgst": false
01:00:21.799 },
01:00:21.799 "method": "bdev_nvme_attach_controller"
01:00:21.799 }'
01:00:21.799 [2024-12-09 10:11:28.811920] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization...
01:00:21.799 [2024-12-09 10:11:28.811975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102931 ]
01:00:22.059 [2024-12-09 10:11:28.964061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
01:00:22.059 [2024-12-09 10:11:29.010333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
01:00:22.318 Running I/O for 1 seconds...
01:00:23.271 2240.00 IOPS, 140.00 MiB/s
01:00:23.271 Latency(us)
01:00:23.271 [2024-12-09T10:11:30.315Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:00:23.271 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
01:00:23.271 Verification LBA range: start 0x0 length 0x400
01:00:23.271 Nvme0n1 : 1.01 2284.75 142.80 0.00 0.00 27563.95 3691.77 26328.87
01:00:23.271 [2024-12-09T10:11:30.315Z] ===================================================================================================================
01:00:23.271 [2024-12-09T10:11:30.315Z] Total : 2284.75 142.80 0.00 0.00 27563.95 3691.77 26328.87
01:00:23.531 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
01:00:23.531 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
01:00:23.531 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf
01:00:23.531 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt
01:00:23.531 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
01:00:23.531 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
01:00:23.531 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
01:00:23.531 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
01:00:23.531 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
01:00:23.531 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
01:00:23.531 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r
nvme-tcp 01:00:23.531 rmmod nvme_tcp 01:00:23.531 rmmod nvme_fabrics 01:00:23.531 rmmod nvme_keyring 01:00:23.531 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:00:23.531 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 01:00:23.531 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 01:00:23.531 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 102813 ']' 01:00:23.531 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 102813 01:00:23.531 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 102813 ']' 01:00:23.531 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 102813 01:00:23.531 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 01:00:23.531 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:00:23.531 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102813 01:00:23.531 killing process with pid 102813 01:00:23.531 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:00:23.531 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:00:23.531 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102813' 01:00:23.531 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 102813 01:00:23.531 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 102813 01:00:23.791 [2024-12-09 10:11:30.646215] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 01:00:23.791 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:00:23.791 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:00:23.791 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:00:23.791 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 01:00:23.791 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 01:00:23.791 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 01:00:23.791 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:00:23.791 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:00:23.791 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:00:23.791 10:11:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:00:23.791 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:00:23.791 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:00:23.791 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:00:23.791 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:00:23.791 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:00:23.791 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:00:23.791 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:00:23.791 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:00:23.791 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:00:24.051 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:00:24.051 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:00:24.051 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:00:24.051 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 01:00:24.051 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:00:24.051 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:00:24.051 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:00:24.051 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 01:00:24.051 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 01:00:24.051 01:00:24.051 real 0m6.154s 01:00:24.051 user 0m17.868s 01:00:24.051 sys 0m2.283s 01:00:24.051 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:24.051 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:00:24.051 ************************************ 01:00:24.051 END TEST nvmf_host_management 01:00:24.051 ************************************ 01:00:24.051 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 01:00:24.051 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 
']' 01:00:24.051 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:24.051 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:00:24.051 ************************************ 01:00:24.051 START TEST nvmf_lvol 01:00:24.051 ************************************ 01:00:24.051 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 01:00:24.312 * Looking for test storage... 01:00:24.312 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:00:24.312 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:00:24.312 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 01:00:24.312 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:00:24.312 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:00:24.312 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:00:24.312 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 01:00:24.312 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 01:00:24.312 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 01:00:24.312 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 01:00:24.312 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 01:00:24.312 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 01:00:24.312 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 01:00:24.312 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 01:00:24.312 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 01:00:24.312 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:00:24.312 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 01:00:24.312 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 01:00:24.312 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 01:00:24.312 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:00:24.312 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 01:00:24.312 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 01:00:24.312 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:00:24.312 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:00:24.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:24.313 --rc genhtml_branch_coverage=1 01:00:24.313 --rc genhtml_function_coverage=1 01:00:24.313 --rc genhtml_legend=1 01:00:24.313 --rc geninfo_all_blocks=1 01:00:24.313 --rc geninfo_unexecuted_blocks=1 01:00:24.313 01:00:24.313 ' 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:00:24.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:24.313 --rc genhtml_branch_coverage=1 01:00:24.313 --rc genhtml_function_coverage=1 01:00:24.313 --rc genhtml_legend=1 01:00:24.313 --rc geninfo_all_blocks=1 01:00:24.313 --rc geninfo_unexecuted_blocks=1 01:00:24.313 01:00:24.313 ' 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:00:24.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:24.313 --rc genhtml_branch_coverage=1 01:00:24.313 --rc genhtml_function_coverage=1 01:00:24.313 --rc genhtml_legend=1 01:00:24.313 --rc geninfo_all_blocks=1 01:00:24.313 --rc geninfo_unexecuted_blocks=1 01:00:24.313 01:00:24.313 ' 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:00:24.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:24.313 --rc genhtml_branch_coverage=1 01:00:24.313 --rc genhtml_function_coverage=1 01:00:24.313 --rc genhtml_legend=1 01:00:24.313 --rc geninfo_all_blocks=1 01:00:24.313 --rc geninfo_unexecuted_blocks=1 01:00:24.313 01:00:24.313 ' 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:00:24.313 10:11:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:00:24.313 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:00:24.314 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:00:24.314 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:00:24.314 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:00:24.314 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:00:24.314 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:00:24.314 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:00:24.314 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:00:24.314 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:00:24.314 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:00:24.314 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:00:24.314 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:00:24.314 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:00:24.314 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:00:24.314 Cannot find device "nvmf_init_br" 01:00:24.314 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # true 01:00:24.314 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:00:24.314 Cannot find device "nvmf_init_br2" 01:00:24.314 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # true 01:00:24.314 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:00:24.314 Cannot find device "nvmf_tgt_br" 01:00:24.314 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # true 01:00:24.314 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:00:24.574 Cannot find device "nvmf_tgt_br2" 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # true 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:00:24.574 Cannot find device "nvmf_init_br" 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # true 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:00:24.574 Cannot find device "nvmf_init_br2" 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # true 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:00:24.574 Cannot find 
device "nvmf_tgt_br" 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # true 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:00:24.574 Cannot find device "nvmf_tgt_br2" 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # true 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:00:24.574 Cannot find device "nvmf_br" 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # true 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:00:24.574 Cannot find device "nvmf_init_if" 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # true 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:00:24.574 Cannot find device "nvmf_init_if2" 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # true 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:00:24.574 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # true 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:00:24.574 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # true 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:00:24.574 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:00:24.834 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:00:24.834 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:00:24.834 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:00:24.834 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:00:24.834 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:00:24.834 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:00:24.834 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:00:24.834 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:00:24.834 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
01:00:24.834 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms
01:00:24.834 
01:00:24.834 --- 10.0.0.3 ping statistics ---
01:00:24.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms
01:00:24.834 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms
01:00:24.834 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
01:00:24.834 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
01:00:24.834 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.031 ms
01:00:24.834 
01:00:24.834 --- 10.0.0.4 ping statistics ---
01:00:24.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms
01:00:24.834 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms
01:00:24.834 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
01:00:24.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
01:00:24.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms
01:00:24.834 
01:00:24.834 --- 10.0.0.1 ping statistics ---
01:00:24.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms
01:00:24.834 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms
01:00:24.834 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
01:00:24.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
01:00:24.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms
01:00:24.834 
01:00:24.834 --- 10.0.0.2 ping statistics ---
01:00:24.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms
01:00:24.834 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms
01:00:24.834 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
01:00:24.834 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@461 -- # return 0
01:00:24.834 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']'
01:00:24.834 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
01:00:24.834 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
01:00:24.834 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
01:00:24.834 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
01:00:24.834 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
01:00:24.834 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp
01:00:24.834 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
01:00:24.834 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
01:00:24.834 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable
01:00:24.834 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
01:00:24.835 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=103198
01:00:24.835 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 103198
01:00:24.835 10:11:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 01:00:24.835 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 103198 ']' 01:00:24.835 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:00:24.835 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 01:00:24.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:00:24.835 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:00:24.835 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 01:00:24.835 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:00:24.835 [2024-12-09 10:11:31.751926] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:00:24.835 [2024-12-09 10:11:31.752748] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 01:00:24.835 [2024-12-09 10:11:31.752802] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:00:25.094 [2024-12-09 10:11:31.898914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:00:25.094 [2024-12-09 10:11:31.940000] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:00:25.094 [2024-12-09 10:11:31.940069] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:00:25.094 [2024-12-09 10:11:31.940075] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:00:25.094 [2024-12-09 10:11:31.940080] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:00:25.094 [2024-12-09 10:11:31.940084] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:00:25.094 [2024-12-09 10:11:31.940981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:00:25.094 [2024-12-09 10:11:31.941083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:25.094 [2024-12-09 10:11:31.941086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:00:25.094 [2024-12-09 10:11:32.009969] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:00:25.094 [2024-12-09 10:11:32.010688] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 01:00:25.094 [2024-12-09 10:11:32.011201] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 01:00:25.094 [2024-12-09 10:11:32.011495] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
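The block above is autotest's standard start-and-wait handshake: nvmfappstart launches nvmf_tgt inside the nvmf_tgt_ns_spdk namespace wired up earlier; -m 0x7 pins three reactors to cores 0, 1 and 2 (matching the three "Reactor started" notices), and --interrupt-mode is what flips each spdk_thread to intr mode; waitforlisten then blocks until pid 103198 answers on /var/tmp/spdk.sock. Below is a condensed sketch of the same pattern; the rpc_get_methods readiness probe is an illustrative stand-in for the real waitforlisten helper, not a copy of it:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
  nvmfpid=$!
  # Poll the RPC socket until the target answers; bail out if it died during boot.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || exit 1
      sleep 0.1
  done
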
01:00:25.665 10:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:00:25.665 10:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 01:00:25.665 10:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:00:25.665 10:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 01:00:25.665 10:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:00:25.665 10:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:00:25.665 10:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:00:25.925 [2024-12-09 10:11:32.846133] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:00:25.925 10:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:00:26.185 10:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 01:00:26.185 10:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:00:26.445 10:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 01:00:26.445 10:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 01:00:26.704 10:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 01:00:26.964 10:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=fb153850-1347-463e-a776-63e936ff3fd0 01:00:26.964 10:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u fb153850-1347-463e-a776-63e936ff3fd0 lvol 20 01:00:27.224 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f1248bc4-353b-46ee-b9a2-2d59a8718591 01:00:27.224 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 01:00:27.224 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f1248bc4-353b-46ee-b9a2-2d59a8718591 01:00:27.483 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:00:27.743 [2024-12-09 10:11:34.605903] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:00:27.743 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:00:28.002 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=103336 01:00:28.002 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 01:00:28.002 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 01:00:28.941 10:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot f1248bc4-353b-46ee-b9a2-2d59a8718591 MY_SNAPSHOT 01:00:29.200 10:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=4b4a69a4-54f3-46fa-9eba-1109bae4994d 01:00:29.200 10:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize f1248bc4-353b-46ee-b9a2-2d59a8718591 30 01:00:29.460 10:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 4b4a69a4-54f3-46fa-9eba-1109bae4994d MY_CLONE 01:00:29.719 10:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a9c64c86-3725-4fa8-9734-7eb3901fcbe1 01:00:29.719 10:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate a9c64c86-3725-4fa8-9734-7eb3901fcbe1 01:00:30.290 10:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 103336 01:00:38.428 Initializing NVMe Controllers 01:00:38.428 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 01:00:38.428 Controller IO queue size 128, less than required. 01:00:38.428 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:00:38.428 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 01:00:38.428 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 01:00:38.428 Initialization complete. Launching workers. 
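The latency table below is what the two spdk_nvme_perf workers (-c 0x18, i.e. lcores 3 and 4) measured while the lvol machinery was exercised underneath them: during the 10-second randwrite window the 20 MiB lvol was snapshotted, grown to 30 MiB, the snapshot thin-cloned, and the clone inflated. Condensed, that choreography looks like the sketch below, where $rpc, $lvol, $snap and $clone are placeholders for the rpc.py path and the UUIDs each call prints (f1248bc4-..., 4b4a69a4-... and a9c64c86-... in this run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)  # read-only point-in-time copy
  $rpc bdev_lvol_resize "$lvol" 30                     # grow the live lvol from 20 to 30 MiB
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)       # writable thin clone of the snapshot
  $rpc bdev_lvol_inflate "$clone"                      # allocate all clusters, detaching the clone from its parent
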
01:00:38.428 ========================================================
01:00:38.428 Latency(us)
01:00:38.428 Device Information : IOPS MiB/s Average min max
01:00:38.428 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 13073.57 51.07 9792.07 3056.95 45691.73
01:00:38.428 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 13114.57 51.23 9758.89 3838.86 79578.77
01:00:38.428 ========================================================
01:00:38.428 Total : 26188.14 102.30 9775.45 3056.95 79578.77
01:00:38.428 
01:00:38.428 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
01:00:38.687 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f1248bc4-353b-46ee-b9a2-2d59a8718591
01:00:38.687 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fb153850-1347-463e-a776-63e936ff3fd0
01:00:38.687 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
01:00:38.947 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
01:00:38.947 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
01:00:38.947 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
01:00:38.947 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync
01:00:38.947 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
01:00:38.947 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
01:00:38.947 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
01:00:38.947 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
01:00:38.947 rmmod nvme_tcp
01:00:38.947 rmmod nvme_fabrics
01:00:38.947 rmmod nvme_keyring
01:00:38.947 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
01:00:38.947 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
01:00:38.947 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
01:00:38.947 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 103198 ']'
01:00:38.947 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 103198
01:00:38.947 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 103198 ']'
01:00:38.947 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 103198
01:00:38.947 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
01:00:38.947 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:00:38.947 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103198
01:00:38.947 10:11:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:00:38.947 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:00:38.947 killing process with pid 103198 01:00:38.947 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103198' 01:00:38.947 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 103198 01:00:38.947 10:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 103198 01:00:39.207 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:00:39.207 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:00:39.207 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:00:39.207 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 01:00:39.207 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:00:39.207 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 01:00:39.207 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 01:00:39.207 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:00:39.207 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:00:39.207 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:00:39.207 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:00:39.207 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:00:39.207 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:00:39.207 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:00:39.207 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:00:39.207 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:00:39.207 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:00:39.207 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:00:39.207 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:00:39.207 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:00:39.467 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:00:39.467 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:00:39.467 
10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 01:00:39.467 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:00:39.467 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:00:39.467 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:00:39.467 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 01:00:39.467 01:00:39.467 real 0m15.335s 01:00:39.467 user 0m54.244s 01:00:39.467 sys 0m5.420s 01:00:39.467 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:39.467 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:00:39.467 ************************************ 01:00:39.467 END TEST nvmf_lvol 01:00:39.467 ************************************ 01:00:39.467 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 01:00:39.467 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:00:39.467 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:39.467 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:00:39.467 ************************************ 01:00:39.467 START TEST nvmf_lvs_grow 01:00:39.467 ************************************ 01:00:39.467 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 01:00:39.728 * Looking for test storage... 
01:00:39.728 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:00:39.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:39.728 --rc genhtml_branch_coverage=1 01:00:39.728 --rc genhtml_function_coverage=1 01:00:39.728 --rc genhtml_legend=1 01:00:39.728 --rc geninfo_all_blocks=1 01:00:39.728 --rc geninfo_unexecuted_blocks=1 01:00:39.728 01:00:39.728 ' 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:00:39.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:39.728 --rc genhtml_branch_coverage=1 01:00:39.728 --rc genhtml_function_coverage=1 01:00:39.728 --rc genhtml_legend=1 01:00:39.728 --rc geninfo_all_blocks=1 01:00:39.728 --rc geninfo_unexecuted_blocks=1 01:00:39.728 01:00:39.728 ' 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:00:39.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:39.728 --rc genhtml_branch_coverage=1 01:00:39.728 --rc genhtml_function_coverage=1 01:00:39.728 --rc genhtml_legend=1 01:00:39.728 --rc geninfo_all_blocks=1 01:00:39.728 --rc geninfo_unexecuted_blocks=1 01:00:39.728 01:00:39.728 ' 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:00:39.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:39.728 --rc genhtml_branch_coverage=1 01:00:39.728 --rc genhtml_function_coverage=1 01:00:39.728 --rc genhtml_legend=1 01:00:39.728 --rc geninfo_all_blocks=1 01:00:39.728 --rc geninfo_unexecuted_blocks=1 01:00:39.728 01:00:39.728 ' 01:00:39.728 10:11:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:00:39.728 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
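The entries just above and just below trace nvmf/common.sh's build_nvmf_app_args() appending flags to the NVMF_APP bash array one condition at a time. A minimal sketch of that idiom follows, under stated assumptions: the initial array contents and the TEST_INTERRUPT_MODE guard variable are illustrative stand-ins, not the harness's own code (the real check is the '[' 1 -eq 1 ']' traced below).

NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)   # assumed starting contents; binary path as launched later in this log
NVMF_APP_SHM_ID=0                                            # shm id 0, matching the -i 0 seen at launch
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)                  # shm id and tracepoint mask, as traced above
NO_HUGE=()                                                   # stays empty unless the no-hugepages path is taken
NVMF_APP+=("${NO_HUGE[@]}")                                  # appending an empty array adds nothing
[[ "${TEST_INTERRUPT_MODE:-0}" -eq 1 ]] && NVMF_APP+=(--interrupt-mode)  # hypothetical guard variable
echo "would launch: ${NVMF_APP[*]}"                          # real invocation expands "${NVMF_APP[@]}" element by element

Keeping the command in an array rather than a flat string is what later lets the harness prepend "${NVMF_TARGET_NS_CMD[@]}" (ip netns exec ...) in front of it without any re-quoting.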
01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:00:39.729 10:11:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:00:39.729 Cannot find device "nvmf_init_br" 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:00:39.729 Cannot find device "nvmf_init_br2" 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:00:39.729 Cannot find device "nvmf_tgt_br" 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 01:00:39.729 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:00:39.989 Cannot find device "nvmf_tgt_br2" 01:00:39.989 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 01:00:39.989 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:00:39.989 Cannot find device "nvmf_init_br" 01:00:39.989 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 01:00:39.989 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:00:39.989 Cannot find device "nvmf_init_br2" 01:00:39.989 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 01:00:39.989 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:00:39.989 Cannot find device "nvmf_tgt_br" 01:00:39.989 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@168 -- # true 01:00:39.989 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:00:39.989 Cannot find device "nvmf_tgt_br2" 01:00:39.989 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 01:00:39.989 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:00:39.989 Cannot find device "nvmf_br" 01:00:39.989 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 01:00:39.989 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:00:39.989 Cannot find device "nvmf_init_if" 01:00:39.989 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 01:00:39.989 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:00:39.989 Cannot find device "nvmf_init_if2" 01:00:39.989 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 01:00:39.989 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:00:39.989 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:00:39.989 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 01:00:39.989 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:00:39.989 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:00:39.989 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 01:00:39.989 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:00:39.989 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:00:39.989 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:00:39.989 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:00:39.989 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:00:39.989 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:00:39.989 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:00:39.989 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:00:39.989 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:00:39.989 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:00:39.990 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:00:39.990 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:00:39.990 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:00:39.990 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:00:39.990 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:00:39.990 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:00:39.990 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:00:39.990 10:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:00:39.990 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:00:39.990 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:00:39.990 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:00:39.990 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:00:39.990 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:00:40.249 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:00:40.249 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:00:40.249 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:00:40.249 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:00:40.250 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:00:40.250 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:00:40.250 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:00:40.250 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:00:40.250 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:00:40.250 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping 
-c 1 10.0.0.3 01:00:40.250 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:00:40.250 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 01:00:40.250 01:00:40.250 --- 10.0.0.3 ping statistics --- 01:00:40.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:00:40.250 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 01:00:40.250 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:00:40.250 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:00:40.250 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 01:00:40.250 01:00:40.250 --- 10.0.0.4 ping statistics --- 01:00:40.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:00:40.250 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 01:00:40.250 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:00:40.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:00:40.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 01:00:40.250 01:00:40.250 --- 10.0.0.1 ping statistics --- 01:00:40.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:00:40.250 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 01:00:40.250 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:00:40.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:00:40.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 01:00:40.250 01:00:40.250 --- 10.0.0.2 ping statistics --- 01:00:40.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:00:40.250 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 01:00:40.250 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:00:40.250 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 01:00:40.250 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:00:40.250 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:00:40.250 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:00:40.250 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:00:40.250 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:00:40.250 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:00:40.250 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:00:40.250 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 01:00:40.250 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:00:40.250 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 01:00:40.250 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:00:40.250 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=103751 01:00:40.250 10:11:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 01:00:40.250 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 103751 01:00:40.250 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 103751 ']' 01:00:40.250 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:00:40.250 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 01:00:40.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:00:40.250 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:00:40.250 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 01:00:40.250 10:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:00:40.250 [2024-12-09 10:11:47.180020] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:00:40.250 [2024-12-09 10:11:47.180827] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 01:00:40.250 [2024-12-09 10:11:47.180875] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:00:40.509 [2024-12-09 10:11:47.326455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:40.509 [2024-12-09 10:11:47.369407] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:00:40.509 [2024-12-09 10:11:47.369458] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:00:40.509 [2024-12-09 10:11:47.369464] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:00:40.510 [2024-12-09 10:11:47.369469] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:00:40.510 [2024-12-09 10:11:47.369496] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:00:40.510 [2024-12-09 10:11:47.369747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:40.510 [2024-12-09 10:11:47.438186] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:00:40.510 [2024-12-09 10:11:47.438423] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
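For reference, the network that nvmf_veth_init assembled in the entries above condenses to the sketch below. This shows one initiator/target pair instead of the two pairs the harness creates; names, addresses, and the iptables rule are copied from this log, while error handling and the "Cannot find device" teardown pass are omitted.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays in the root namespace
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target end gets moved into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge                              # bridge ties the peer ends together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP toward the initiator side
ping -c 1 10.0.0.3                                           # root namespace reaches the target across the bridge

With the target then launched via ip netns exec nvmf_tgt_ns_spdk (as in the nvmf_tgt command above), initiator-side tools in the root namespace reach 10.0.0.3:4420 over a fully local, reproducible link.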
01:00:41.079 10:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:00:41.079 10:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 01:00:41.079 10:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:00:41.079 10:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 01:00:41.079 10:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:00:41.079 10:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:00:41.079 10:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:00:41.339 [2024-12-09 10:11:48.274557] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:00:41.339 10:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 01:00:41.339 10:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:00:41.339 10:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:41.339 10:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:00:41.339 ************************************ 01:00:41.339 START TEST lvs_grow_clean 01:00:41.339 ************************************ 01:00:41.339 10:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 01:00:41.339 10:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 01:00:41.339 10:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 01:00:41.339 10:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 01:00:41.339 10:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 01:00:41.339 10:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 01:00:41.339 10:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 01:00:41.339 10:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:00:41.339 10:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:00:41.339 10:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:00:41.599 10:11:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 01:00:41.599 10:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 01:00:41.859 10:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=2a71ba09-97a0-4d4d-a6c0-62c7f16d1953 01:00:41.859 10:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a71ba09-97a0-4d4d-a6c0-62c7f16d1953 01:00:41.859 10:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 01:00:42.119 10:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 01:00:42.119 10:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 01:00:42.119 10:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2a71ba09-97a0-4d4d-a6c0-62c7f16d1953 lvol 150 01:00:42.379 10:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=732da9f1-da62-47fb-96b6-c3c665c8f74a 01:00:42.379 10:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:00:42.379 10:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 01:00:42.379 [2024-12-09 10:11:49.354231] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 01:00:42.379 [2024-12-09 10:11:49.354396] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 01:00:42.379 true 01:00:42.379 10:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a71ba09-97a0-4d4d-a6c0-62c7f16d1953 01:00:42.379 10:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 01:00:42.639 10:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 01:00:42.639 10:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 01:00:42.899 10:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 732da9f1-da62-47fb-96b6-c3c665c8f74a 01:00:43.159 10:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:00:43.159 [2024-12-09 10:11:50.170763] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:00:43.159 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:00:43.420 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=103907 01:00:43.420 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:00:43.420 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 01:00:43.420 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 103907 /var/tmp/bdevperf.sock 01:00:43.420 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 103907 ']' 01:00:43.420 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:00:43.420 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 01:00:43.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:00:43.420 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:00:43.420 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 01:00:43.420 10:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 01:00:43.420 [2024-12-09 10:11:50.435145] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
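The lvs_grow_clean setup the trace above just completed reduces to a short rpc.py sequence. The sketch below is a condensation with error checking removed; paths, sizes, and flags are copied from this log, and the command substitutions mirror how the harness's lvs/lvol variables were populated (the UUIDs shown in the trace are specific to this run).

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
truncate -s 200M "$aio"                                      # 200 MiB backing file
$rpc bdev_aio_create "$aio" aio_bdev 4096                    # AIO bdev with 4 KiB blocks
$rpc nvmf_create_transport -t tcp -o -u 8192
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)       # 4 MiB clusters -> 49 data clusters here
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)             # 150 MiB logical volume
truncate -s 400M "$aio"                                      # grow the file on disk...
$rpc bdev_aio_rescan aio_bdev                                # ...then rescan: 51200 -> 102400 blocks above
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &   # -z waits for the perform_tests RPC issued below

Note that the rescan alone only makes the extra blocks visible; total_data_clusters moves from 49 to 99 only once bdev_lvol_grow_lvstore is issued mid-run, as the trace below shows.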
01:00:43.420 [2024-12-09 10:11:50.435204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103907 ] 01:00:43.680 [2024-12-09 10:11:50.582600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:43.680 [2024-12-09 10:11:50.627933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:00:44.617 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:00:44.617 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 01:00:44.617 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 01:00:44.617 Nvme0n1 01:00:44.617 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 01:00:44.877 [ 01:00:44.877 { 01:00:44.877 "aliases": [ 01:00:44.877 "732da9f1-da62-47fb-96b6-c3c665c8f74a" 01:00:44.877 ], 01:00:44.877 "assigned_rate_limits": { 01:00:44.877 "r_mbytes_per_sec": 0, 01:00:44.877 "rw_ios_per_sec": 0, 01:00:44.877 "rw_mbytes_per_sec": 0, 01:00:44.877 "w_mbytes_per_sec": 0 01:00:44.877 }, 01:00:44.877 "block_size": 4096, 01:00:44.877 "claimed": false, 01:00:44.877 "driver_specific": { 01:00:44.877 "mp_policy": "active_passive", 01:00:44.877 "nvme": [ 01:00:44.877 { 01:00:44.877 "ctrlr_data": { 01:00:44.877 "ana_reporting": false, 01:00:44.877 "cntlid": 1, 01:00:44.877 "firmware_revision": "25.01", 01:00:44.877 "model_number": "SPDK bdev Controller", 01:00:44.877 "multi_ctrlr": true, 01:00:44.877 "oacs": { 01:00:44.877 "firmware": 0, 01:00:44.877 "format": 0, 01:00:44.877 "ns_manage": 0, 01:00:44.877 "security": 0 01:00:44.877 }, 01:00:44.877 "serial_number": "SPDK0", 01:00:44.877 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:00:44.877 "vendor_id": "0x8086" 01:00:44.877 }, 01:00:44.877 "ns_data": { 01:00:44.877 "can_share": true, 01:00:44.877 "id": 1 01:00:44.877 }, 01:00:44.877 "trid": { 01:00:44.877 "adrfam": "IPv4", 01:00:44.877 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:00:44.877 "traddr": "10.0.0.3", 01:00:44.877 "trsvcid": "4420", 01:00:44.877 "trtype": "TCP" 01:00:44.877 }, 01:00:44.877 "vs": { 01:00:44.877 "nvme_version": "1.3" 01:00:44.877 } 01:00:44.877 } 01:00:44.877 ] 01:00:44.877 }, 01:00:44.877 "memory_domains": [ 01:00:44.877 { 01:00:44.877 "dma_device_id": "system", 01:00:44.877 "dma_device_type": 1 01:00:44.877 } 01:00:44.877 ], 01:00:44.877 "name": "Nvme0n1", 01:00:44.877 "num_blocks": 38912, 01:00:44.877 "numa_id": -1, 01:00:44.877 "product_name": "NVMe disk", 01:00:44.877 "supported_io_types": { 01:00:44.877 "abort": true, 01:00:44.877 "compare": true, 01:00:44.877 "compare_and_write": true, 01:00:44.877 "copy": true, 01:00:44.877 "flush": true, 01:00:44.877 "get_zone_info": false, 01:00:44.877 "nvme_admin": true, 01:00:44.877 "nvme_io": true, 01:00:44.877 "nvme_io_md": false, 01:00:44.877 "nvme_iov_md": false, 01:00:44.877 "read": true, 01:00:44.877 "reset": true, 01:00:44.877 "seek_data": false, 01:00:44.877 
"seek_hole": false, 01:00:44.877 "unmap": true, 01:00:44.877 "write": true, 01:00:44.877 "write_zeroes": true, 01:00:44.877 "zcopy": false, 01:00:44.877 "zone_append": false, 01:00:44.877 "zone_management": false 01:00:44.877 }, 01:00:44.877 "uuid": "732da9f1-da62-47fb-96b6-c3c665c8f74a", 01:00:44.877 "zoned": false 01:00:44.877 } 01:00:44.877 ] 01:00:44.877 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=103955 01:00:44.877 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:00:44.877 10:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 01:00:44.877 Running I/O for 10 seconds... 01:00:46.258 Latency(us) 01:00:46.258 [2024-12-09T10:11:53.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:00:46.258 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:00:46.258 Nvme0n1 : 1.00 11098.00 43.35 0.00 0.00 0.00 0.00 0.00 01:00:46.258 [2024-12-09T10:11:53.302Z] =================================================================================================================== 01:00:46.258 [2024-12-09T10:11:53.302Z] Total : 11098.00 43.35 0.00 0.00 0.00 0.00 0.00 01:00:46.258 01:00:46.827 10:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2a71ba09-97a0-4d4d-a6c0-62c7f16d1953 01:00:46.827 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:00:46.827 Nvme0n1 : 2.00 11390.00 44.49 0.00 0.00 0.00 0.00 0.00 01:00:46.827 [2024-12-09T10:11:53.871Z] =================================================================================================================== 01:00:46.827 [2024-12-09T10:11:53.871Z] Total : 11390.00 44.49 0.00 0.00 0.00 0.00 0.00 01:00:46.827 01:00:47.087 true 01:00:47.087 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a71ba09-97a0-4d4d-a6c0-62c7f16d1953 01:00:47.087 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 01:00:47.346 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 01:00:47.346 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 01:00:47.346 10:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 103955 01:00:47.912 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:00:47.912 Nvme0n1 : 3.00 11361.67 44.38 0.00 0.00 0.00 0.00 0.00 01:00:47.912 [2024-12-09T10:11:54.956Z] =================================================================================================================== 01:00:47.912 [2024-12-09T10:11:54.956Z] Total : 11361.67 44.38 0.00 0.00 0.00 0.00 0.00 01:00:47.912 01:00:48.877 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:00:48.877 Nvme0n1 : 4.00 11378.75 44.45 0.00 0.00 0.00 0.00 0.00 01:00:48.877 
[2024-12-09T10:11:55.921Z] =================================================================================================================== 01:00:48.877 [2024-12-09T10:11:55.921Z] Total : 11378.75 44.45 0.00 0.00 0.00 0.00 0.00 01:00:48.877 01:00:49.832 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:00:49.832 Nvme0n1 : 5.00 11377.40 44.44 0.00 0.00 0.00 0.00 0.00 01:00:49.832 [2024-12-09T10:11:56.876Z] =================================================================================================================== 01:00:49.832 [2024-12-09T10:11:56.876Z] Total : 11377.40 44.44 0.00 0.00 0.00 0.00 0.00 01:00:49.832 01:00:51.210 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:00:51.211 Nvme0n1 : 6.00 11301.17 44.15 0.00 0.00 0.00 0.00 0.00 01:00:51.211 [2024-12-09T10:11:58.255Z] =================================================================================================================== 01:00:51.211 [2024-12-09T10:11:58.255Z] Total : 11301.17 44.15 0.00 0.00 0.00 0.00 0.00 01:00:51.211 01:00:52.147 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:00:52.147 Nvme0n1 : 7.00 11319.14 44.22 0.00 0.00 0.00 0.00 0.00 01:00:52.147 [2024-12-09T10:11:59.191Z] =================================================================================================================== 01:00:52.147 [2024-12-09T10:11:59.191Z] Total : 11319.14 44.22 0.00 0.00 0.00 0.00 0.00 01:00:52.147 01:00:53.086 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:00:53.086 Nvme0n1 : 8.00 11301.25 44.15 0.00 0.00 0.00 0.00 0.00 01:00:53.086 [2024-12-09T10:12:00.130Z] =================================================================================================================== 01:00:53.086 [2024-12-09T10:12:00.130Z] Total : 11301.25 44.15 0.00 0.00 0.00 0.00 0.00 01:00:53.086 01:00:54.024 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:00:54.024 Nvme0n1 : 9.00 11188.56 43.71 0.00 0.00 0.00 0.00 0.00 01:00:54.024 [2024-12-09T10:12:01.068Z] =================================================================================================================== 01:00:54.024 [2024-12-09T10:12:01.068Z] Total : 11188.56 43.71 0.00 0.00 0.00 0.00 0.00 01:00:54.024 01:00:54.962 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:00:54.962 Nvme0n1 : 10.00 11108.10 43.39 0.00 0.00 0.00 0.00 0.00 01:00:54.962 [2024-12-09T10:12:02.006Z] =================================================================================================================== 01:00:54.962 [2024-12-09T10:12:02.006Z] Total : 11108.10 43.39 0.00 0.00 0.00 0.00 0.00 01:00:54.962 01:00:54.962 01:00:54.962 Latency(us) 01:00:54.962 [2024-12-09T10:12:02.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:00:54.962 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:00:54.962 Nvme0n1 : 10.01 11113.52 43.41 0.00 0.00 11513.37 4979.59 32968.33 01:00:54.962 [2024-12-09T10:12:02.006Z] =================================================================================================================== 01:00:54.962 [2024-12-09T10:12:02.006Z] Total : 11113.52 43.41 0.00 0.00 11513.37 4979.59 32968.33 01:00:54.962 { 01:00:54.962 "results": [ 01:00:54.962 { 01:00:54.962 "job": "Nvme0n1", 01:00:54.962 "core_mask": "0x2", 01:00:54.962 "workload": "randwrite", 01:00:54.962 "status": "finished", 01:00:54.962 "queue_depth": 128, 01:00:54.962 
"io_size": 4096, 01:00:54.962 "runtime": 10.006637, 01:00:54.962 "iops": 11113.52395415163, 01:00:54.962 "mibps": 43.4122029459048, 01:00:54.962 "io_failed": 0, 01:00:54.962 "io_timeout": 0, 01:00:54.962 "avg_latency_us": 11513.371066626547, 01:00:54.962 "min_latency_us": 4979.591266375546, 01:00:54.962 "max_latency_us": 32968.32838427948 01:00:54.962 } 01:00:54.962 ], 01:00:54.962 "core_count": 1 01:00:54.962 } 01:00:54.962 10:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 103907 01:00:54.962 10:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 103907 ']' 01:00:54.962 10:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 103907 01:00:54.962 10:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 01:00:54.962 10:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:00:54.963 10:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103907 01:00:54.963 10:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:00:54.963 10:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:00:54.963 killing process with pid 103907 01:00:54.963 10:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103907' 01:00:54.963 10:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 103907 01:00:54.963 Received shutdown signal, test time was about 10.000000 seconds 01:00:54.963 01:00:54.963 Latency(us) 01:00:54.963 [2024-12-09T10:12:02.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:00:54.963 [2024-12-09T10:12:02.007Z] =================================================================================================================== 01:00:54.963 [2024-12-09T10:12:02.007Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:00:54.963 10:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 103907 01:00:55.222 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:00:55.481 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:00:55.741 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a71ba09-97a0-4d4d-a6c0-62c7f16d1953 01:00:55.741 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 01:00:55.741 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 01:00:55.741 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 01:00:55.741 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:00:55.999 [2024-12-09 10:12:02.878347] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 01:00:55.999 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a71ba09-97a0-4d4d-a6c0-62c7f16d1953 01:00:55.999 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 01:00:55.999 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a71ba09-97a0-4d4d-a6c0-62c7f16d1953 01:00:56.000 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:00:56.000 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:56.000 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:00:56.000 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:56.000 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:00:56.000 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:56.000 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:00:56.000 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 01:00:56.000 10:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a71ba09-97a0-4d4d-a6c0-62c7f16d1953 01:00:56.259 2024/12/09 10:12:03 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:2a71ba09-97a0-4d4d-a6c0-62c7f16d1953], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 01:00:56.259 request: 01:00:56.259 { 01:00:56.259 "method": "bdev_lvol_get_lvstores", 01:00:56.259 "params": { 01:00:56.259 "uuid": "2a71ba09-97a0-4d4d-a6c0-62c7f16d1953" 01:00:56.259 } 01:00:56.259 } 01:00:56.259 Got JSON-RPC error response 01:00:56.259 GoRPCClient: error on JSON-RPC call 01:00:56.259 10:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 01:00:56.259 10:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 
128 )) 01:00:56.259 10:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:00:56.259 10:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:00:56.259 10:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:00:56.518 aio_bdev 01:00:56.518 10:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 732da9f1-da62-47fb-96b6-c3c665c8f74a 01:00:56.519 10:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=732da9f1-da62-47fb-96b6-c3c665c8f74a 01:00:56.519 10:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:00:56.519 10:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 01:00:56.519 10:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:00:56.519 10:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:00:56.519 10:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 01:00:56.778 10:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 732da9f1-da62-47fb-96b6-c3c665c8f74a -t 2000 01:00:56.778 [ 01:00:56.778 { 01:00:56.778 "aliases": [ 01:00:56.778 "lvs/lvol" 01:00:56.778 ], 01:00:56.778 "assigned_rate_limits": { 01:00:56.778 "r_mbytes_per_sec": 0, 01:00:56.778 "rw_ios_per_sec": 0, 01:00:56.778 "rw_mbytes_per_sec": 0, 01:00:56.778 "w_mbytes_per_sec": 0 01:00:56.778 }, 01:00:56.778 "block_size": 4096, 01:00:56.778 "claimed": false, 01:00:56.778 "driver_specific": { 01:00:56.778 "lvol": { 01:00:56.778 "base_bdev": "aio_bdev", 01:00:56.778 "clone": false, 01:00:56.778 "esnap_clone": false, 01:00:56.778 "lvol_store_uuid": "2a71ba09-97a0-4d4d-a6c0-62c7f16d1953", 01:00:56.778 "num_allocated_clusters": 38, 01:00:56.778 "snapshot": false, 01:00:56.778 "thin_provision": false 01:00:56.778 } 01:00:56.778 }, 01:00:56.778 "name": "732da9f1-da62-47fb-96b6-c3c665c8f74a", 01:00:56.778 "num_blocks": 38912, 01:00:56.778 "product_name": "Logical Volume", 01:00:56.778 "supported_io_types": { 01:00:56.778 "abort": false, 01:00:56.778 "compare": false, 01:00:56.778 "compare_and_write": false, 01:00:56.778 "copy": false, 01:00:56.778 "flush": false, 01:00:56.778 "get_zone_info": false, 01:00:56.778 "nvme_admin": false, 01:00:56.778 "nvme_io": false, 01:00:56.778 "nvme_io_md": false, 01:00:56.779 "nvme_iov_md": false, 01:00:56.779 "read": true, 01:00:56.779 "reset": true, 01:00:56.779 "seek_data": true, 01:00:56.779 "seek_hole": true, 01:00:56.779 "unmap": true, 01:00:56.779 "write": true, 01:00:56.779 "write_zeroes": true, 01:00:56.779 "zcopy": false, 01:00:56.779 "zone_append": false, 01:00:56.779 "zone_management": false 01:00:56.779 }, 01:00:56.779 "uuid": 
"732da9f1-da62-47fb-96b6-c3c665c8f74a", 01:00:56.779 "zoned": false 01:00:56.779 } 01:00:56.779 ] 01:00:56.779 10:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 01:00:56.779 10:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a71ba09-97a0-4d4d-a6c0-62c7f16d1953 01:00:56.779 10:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 01:00:57.039 10:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 01:00:57.039 10:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a71ba09-97a0-4d4d-a6c0-62c7f16d1953 01:00:57.039 10:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 01:00:57.298 10:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 01:00:57.298 10:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 732da9f1-da62-47fb-96b6-c3c665c8f74a 01:00:57.557 10:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2a71ba09-97a0-4d4d-a6c0-62c7f16d1953 01:00:57.816 10:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:00:58.075 10:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:00:58.333 01:00:58.333 real 0m16.995s 01:00:58.333 user 0m16.215s 01:00:58.333 sys 0m2.054s 01:00:58.333 10:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:58.333 10:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 01:00:58.333 ************************************ 01:00:58.333 END TEST lvs_grow_clean 01:00:58.333 ************************************ 01:00:58.333 10:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 01:00:58.333 10:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:00:58.333 10:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:58.333 10:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:00:58.333 ************************************ 01:00:58.333 START TEST lvs_grow_dirty 01:00:58.333 ************************************ 01:00:58.333 10:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 01:00:58.333 10:12:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 01:00:58.333 10:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 01:00:58.333 10:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 01:00:58.333 10:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 01:00:58.333 10:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 01:00:58.333 10:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 01:00:58.333 10:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:00:58.333 10:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:00:58.593 10:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:00:58.593 10:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 01:00:58.593 10:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 01:00:58.852 10:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=9eceb961-bfe8-40ee-93f3-22b54686d584 01:00:58.852 10:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9eceb961-bfe8-40ee-93f3-22b54686d584 01:00:58.852 10:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 01:00:59.112 10:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 01:00:59.112 10:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 01:00:59.112 10:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9eceb961-bfe8-40ee-93f3-22b54686d584 lvol 150 01:00:59.372 10:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b9cc4a61-6ee1-4dd6-8af1-048a0e4b3173 01:00:59.372 10:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:00:59.372 10:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 01:00:59.632 [2024-12-09 10:12:06.446215] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 01:00:59.632 [2024-12-09 10:12:06.446382] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 01:00:59.632 true 01:00:59.632 10:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9eceb961-bfe8-40ee-93f3-22b54686d584 01:00:59.632 10:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 01:00:59.632 10:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 01:00:59.632 10:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 01:00:59.892 10:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b9cc4a61-6ee1-4dd6-8af1-048a0e4b3173 01:01:00.151 10:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:01:00.411 [2024-12-09 10:12:07.258736] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:01:00.411 10:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:01:00.670 10:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 01:01:00.670 10:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=104337 01:01:00.670 10:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:01:00.670 10:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 104337 /var/tmp/bdevperf.sock 01:01:00.670 10:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 104337 ']' 01:01:00.670 10:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:01:00.670 10:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 01:01:00.670 10:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 01:01:00.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:01:00.670 10:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 01:01:00.670 10:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:01:00.670 [2024-12-09 10:12:07.518975] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 01:01:00.670 [2024-12-09 10:12:07.519069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104337 ] 01:01:00.670 [2024-12-09 10:12:07.665498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:00.670 [2024-12-09 10:12:07.710963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:01:01.610 10:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:01:01.610 10:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 01:01:01.610 10:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 01:01:01.870 Nvme0n1 01:01:01.870 10:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 01:01:01.870 [ 01:01:01.870 { 01:01:01.870 "aliases": [ 01:01:01.870 "b9cc4a61-6ee1-4dd6-8af1-048a0e4b3173" 01:01:01.870 ], 01:01:01.870 "assigned_rate_limits": { 01:01:01.870 "r_mbytes_per_sec": 0, 01:01:01.870 "rw_ios_per_sec": 0, 01:01:01.870 "rw_mbytes_per_sec": 0, 01:01:01.870 "w_mbytes_per_sec": 0 01:01:01.870 }, 01:01:01.870 "block_size": 4096, 01:01:01.870 "claimed": false, 01:01:01.870 "driver_specific": { 01:01:01.870 "mp_policy": "active_passive", 01:01:01.870 "nvme": [ 01:01:01.870 { 01:01:01.870 "ctrlr_data": { 01:01:01.870 "ana_reporting": false, 01:01:01.870 "cntlid": 1, 01:01:01.870 "firmware_revision": "25.01", 01:01:01.870 "model_number": "SPDK bdev Controller", 01:01:01.870 "multi_ctrlr": true, 01:01:01.870 "oacs": { 01:01:01.870 "firmware": 0, 01:01:01.870 "format": 0, 01:01:01.870 "ns_manage": 0, 01:01:01.870 "security": 0 01:01:01.870 }, 01:01:01.870 "serial_number": "SPDK0", 01:01:01.870 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:01:01.870 "vendor_id": "0x8086" 01:01:01.870 }, 01:01:01.870 "ns_data": { 01:01:01.870 "can_share": true, 01:01:01.870 "id": 1 01:01:01.870 }, 01:01:01.870 "trid": { 01:01:01.870 "adrfam": "IPv4", 01:01:01.870 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:01:01.870 "traddr": "10.0.0.3", 01:01:01.870 "trsvcid": "4420", 01:01:01.870 "trtype": "TCP" 01:01:01.870 }, 01:01:01.870 "vs": { 01:01:01.870 "nvme_version": "1.3" 01:01:01.870 } 01:01:01.870 } 01:01:01.870 ] 01:01:01.870 }, 01:01:01.870 "memory_domains": [ 01:01:01.870 { 01:01:01.870 "dma_device_id": "system", 01:01:01.870 "dma_device_type": 1 01:01:01.870 } 01:01:01.870 ], 01:01:01.870 "name": "Nvme0n1", 01:01:01.870 "num_blocks": 38912, 
01:01:01.870 "numa_id": -1, 01:01:01.870 "product_name": "NVMe disk", 01:01:01.870 "supported_io_types": { 01:01:01.870 "abort": true, 01:01:01.870 "compare": true, 01:01:01.870 "compare_and_write": true, 01:01:01.870 "copy": true, 01:01:01.870 "flush": true, 01:01:01.870 "get_zone_info": false, 01:01:01.870 "nvme_admin": true, 01:01:01.870 "nvme_io": true, 01:01:01.870 "nvme_io_md": false, 01:01:01.870 "nvme_iov_md": false, 01:01:01.870 "read": true, 01:01:01.870 "reset": true, 01:01:01.870 "seek_data": false, 01:01:01.870 "seek_hole": false, 01:01:01.870 "unmap": true, 01:01:01.870 "write": true, 01:01:01.870 "write_zeroes": true, 01:01:01.870 "zcopy": false, 01:01:01.870 "zone_append": false, 01:01:01.870 "zone_management": false 01:01:01.870 }, 01:01:01.870 "uuid": "b9cc4a61-6ee1-4dd6-8af1-048a0e4b3173", 01:01:01.870 "zoned": false 01:01:01.870 } 01:01:01.870 ] 01:01:01.870 10:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=104379 01:01:01.870 10:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:01:01.870 10:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 01:01:02.130 Running I/O for 10 seconds... 01:01:03.106 Latency(us) 01:01:03.106 [2024-12-09T10:12:10.150Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:01:03.106 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:01:03.106 Nvme0n1 : 1.00 11087.00 43.31 0.00 0.00 0.00 0.00 0.00 01:01:03.106 [2024-12-09T10:12:10.150Z] =================================================================================================================== 01:01:03.106 [2024-12-09T10:12:10.150Z] Total : 11087.00 43.31 0.00 0.00 0.00 0.00 0.00 01:01:03.106 01:01:04.045 10:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9eceb961-bfe8-40ee-93f3-22b54686d584 01:01:04.045 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:01:04.045 Nvme0n1 : 2.00 11153.50 43.57 0.00 0.00 0.00 0.00 0.00 01:01:04.045 [2024-12-09T10:12:11.089Z] =================================================================================================================== 01:01:04.045 [2024-12-09T10:12:11.089Z] Total : 11153.50 43.57 0.00 0.00 0.00 0.00 0.00 01:01:04.045 01:01:04.305 true 01:01:04.305 10:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9eceb961-bfe8-40ee-93f3-22b54686d584 01:01:04.305 10:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 01:01:04.565 10:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 01:01:04.565 10:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 01:01:04.565 10:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 104379 01:01:05.134 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 
128, IO size: 4096) 01:01:05.134 Nvme0n1 : 3.00 10979.00 42.89 0.00 0.00 0.00 0.00 0.00 01:01:05.134 [2024-12-09T10:12:12.178Z] =================================================================================================================== 01:01:05.134 [2024-12-09T10:12:12.178Z] Total : 10979.00 42.89 0.00 0.00 0.00 0.00 0.00 01:01:05.134 01:01:06.073 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:01:06.073 Nvme0n1 : 4.00 11120.00 43.44 0.00 0.00 0.00 0.00 0.00 01:01:06.073 [2024-12-09T10:12:13.117Z] =================================================================================================================== 01:01:06.073 [2024-12-09T10:12:13.117Z] Total : 11120.00 43.44 0.00 0.00 0.00 0.00 0.00 01:01:06.073 01:01:07.011 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:01:07.011 Nvme0n1 : 5.00 11214.00 43.80 0.00 0.00 0.00 0.00 0.00 01:01:07.011 [2024-12-09T10:12:14.055Z] =================================================================================================================== 01:01:07.011 [2024-12-09T10:12:14.055Z] Total : 11214.00 43.80 0.00 0.00 0.00 0.00 0.00 01:01:07.011 01:01:07.949 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:01:07.949 Nvme0n1 : 6.00 11377.17 44.44 0.00 0.00 0.00 0.00 0.00 01:01:07.949 [2024-12-09T10:12:14.993Z] =================================================================================================================== 01:01:07.949 [2024-12-09T10:12:14.993Z] Total : 11377.17 44.44 0.00 0.00 0.00 0.00 0.00 01:01:07.949 01:01:09.332 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:01:09.332 Nvme0n1 : 7.00 11494.57 44.90 0.00 0.00 0.00 0.00 0.00 01:01:09.332 [2024-12-09T10:12:16.376Z] =================================================================================================================== 01:01:09.332 [2024-12-09T10:12:16.376Z] Total : 11494.57 44.90 0.00 0.00 0.00 0.00 0.00 01:01:09.332 01:01:10.270 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:01:10.270 Nvme0n1 : 8.00 11576.50 45.22 0.00 0.00 0.00 0.00 0.00 01:01:10.270 [2024-12-09T10:12:17.315Z] =================================================================================================================== 01:01:10.271 [2024-12-09T10:12:17.315Z] Total : 11576.50 45.22 0.00 0.00 0.00 0.00 0.00 01:01:10.271 01:01:11.208 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:01:11.208 Nvme0n1 : 9.00 11246.78 43.93 0.00 0.00 0.00 0.00 0.00 01:01:11.208 [2024-12-09T10:12:18.252Z] =================================================================================================================== 01:01:11.208 [2024-12-09T10:12:18.252Z] Total : 11246.78 43.93 0.00 0.00 0.00 0.00 0.00 01:01:11.208 01:01:12.147 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:01:12.147 Nvme0n1 : 10.00 11061.20 43.21 0.00 0.00 0.00 0.00 0.00 01:01:12.147 [2024-12-09T10:12:19.191Z] =================================================================================================================== 01:01:12.147 [2024-12-09T10:12:19.191Z] Total : 11061.20 43.21 0.00 0.00 0.00 0.00 0.00 01:01:12.147 01:01:12.147 01:01:12.147 Latency(us) 01:01:12.147 [2024-12-09T10:12:19.191Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:01:12.147 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:01:12.147 Nvme0n1 : 10.00 11065.15 43.22 0.00 0.00 11563.77 
4407.22 498188.07 01:01:12.147 [2024-12-09T10:12:19.191Z] =================================================================================================================== 01:01:12.147 [2024-12-09T10:12:19.191Z] Total : 11065.15 43.22 0.00 0.00 11563.77 4407.22 498188.07 01:01:12.147 { 01:01:12.147 "results": [ 01:01:12.147 { 01:01:12.147 "job": "Nvme0n1", 01:01:12.147 "core_mask": "0x2", 01:01:12.147 "workload": "randwrite", 01:01:12.147 "status": "finished", 01:01:12.147 "queue_depth": 128, 01:01:12.147 "io_size": 4096, 01:01:12.147 "runtime": 10.002213, 01:01:12.147 "iops": 11065.151282021288, 01:01:12.147 "mibps": 43.22324719539566, 01:01:12.147 "io_failed": 0, 01:01:12.147 "io_timeout": 0, 01:01:12.147 "avg_latency_us": 11563.768591305736, 01:01:12.147 "min_latency_us": 4407.2244541484715, 01:01:12.147 "max_latency_us": 498188.0733624454 01:01:12.147 } 01:01:12.147 ], 01:01:12.147 "core_count": 1 01:01:12.147 } 01:01:12.147 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 104337 01:01:12.147 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 104337 ']' 01:01:12.147 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 104337 01:01:12.147 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 01:01:12.147 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:01:12.147 10:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104337 01:01:12.147 10:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:01:12.147 10:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:01:12.147 killing process with pid 104337 01:01:12.147 10:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104337' 01:01:12.147 10:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 104337 01:01:12.147 Received shutdown signal, test time was about 10.000000 seconds 01:01:12.147 01:01:12.147 Latency(us) 01:01:12.147 [2024-12-09T10:12:19.191Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:01:12.147 [2024-12-09T10:12:19.191Z] =================================================================================================================== 01:01:12.147 [2024-12-09T10:12:19.191Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:01:12.147 10:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 104337 01:01:12.406 10:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:01:12.406 10:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 
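Note: the ten-second bdevperf randwrite run above is the core of the dirty test. The backing AIO file was already truncated to 400M and rescanned during setup; about two seconds into the run, bdev_lvol_grow_lvstore is issued so the lvstore claims the new space while writes are in flight, going from 49 to 99 total data clusters. Condensed from the xtrace above (same file path and UUID as this run), the grow sequence is roughly:

    truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev   # enlarge the backing file
    scripts/rpc.py bdev_aio_rescan aio_bdev                                   # re-read size: 51200 -> 102400 blocks here
    scripts/rpc.py bdev_lvol_grow_lvstore -u 9eceb961-bfe8-40ee-93f3-22b54686d584
    scripts/rpc.py bdev_lvol_get_lvstores -u 9eceb961-bfe8-40ee-93f3-22b54686d584 \
        | jq -r '.[0].total_data_clusters'                                    # expect 99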
01:01:12.665 10:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9eceb961-bfe8-40ee-93f3-22b54686d584 01:01:12.666 10:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 01:01:12.923 10:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 01:01:12.923 10:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 01:01:12.924 10:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 103751 01:01:12.924 10:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 103751 01:01:12.924 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 103751 Killed "${NVMF_APP[@]}" "$@" 01:01:12.924 10:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 01:01:12.924 10:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 01:01:12.924 10:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:01:12.924 10:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 01:01:12.924 10:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:01:12.924 10:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=104541 01:01:12.924 10:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 01:01:12.924 10:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 104541 01:01:12.924 10:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 104541 ']' 01:01:12.924 10:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:01:12.924 10:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 01:01:12.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:01:12.924 10:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:01:12.924 10:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 01:01:12.924 10:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:01:12.924 [2024-12-09 10:12:19.905789] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
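Note: the dirty variant deliberately leaves the lvstore unclosed: the target is SIGKILLed (pid 103751 above, hence bash's "Killed" message) and a fresh nvmf_tgt is started in interrupt mode. Reduced to its essentials, with the flags taken from this run:

    kill -9 "$nvmfpid"                                    # 103751 here; lvstore left dirty
    nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &     # restart, single core, interrupt mode
    scripts/rpc.py bdev_aio_create \
        /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096

(This job additionally wraps the restart in ip netns exec nvmf_tgt_ns_spdk and uses the full build path, as shown above.) Re-attaching the AIO bdev below triggers blobstore recovery ("Performing recovery on blobstore", then "Recover: blob 0x0" and "Recover: blob 0x1"), after which the free/total cluster counts must still match the grown values (61 free of 99 total).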
01:01:12.924 [2024-12-09 10:12:19.906566] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 01:01:12.924 [2024-12-09 10:12:19.906610] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:01:13.182 [2024-12-09 10:12:20.057367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:13.182 [2024-12-09 10:12:20.121231] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:01:13.182 [2024-12-09 10:12:20.121284] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:01:13.182 [2024-12-09 10:12:20.121292] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:01:13.182 [2024-12-09 10:12:20.121297] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:01:13.182 [2024-12-09 10:12:20.121301] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:01:13.182 [2024-12-09 10:12:20.121634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:13.442 [2024-12-09 10:12:20.247123] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:01:13.442 [2024-12-09 10:12:20.247379] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 01:01:14.011 10:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:01:14.011 10:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 01:01:14.011 10:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:01:14.011 10:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 01:01:14.012 10:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:01:14.012 10:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:01:14.012 10:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:01:14.012 [2024-12-09 10:12:21.013954] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 01:01:14.012 [2024-12-09 10:12:21.014431] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 01:01:14.012 [2024-12-09 10:12:21.014733] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 01:01:14.271 10:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 01:01:14.271 10:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b9cc4a61-6ee1-4dd6-8af1-048a0e4b3173 01:01:14.271 10:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@903 -- # local bdev_name=b9cc4a61-6ee1-4dd6-8af1-048a0e4b3173 01:01:14.271 10:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:01:14.271 10:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 01:01:14.271 10:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:01:14.271 10:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:01:14.271 10:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 01:01:14.271 10:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b9cc4a61-6ee1-4dd6-8af1-048a0e4b3173 -t 2000 01:01:14.530 [ 01:01:14.530 { 01:01:14.530 "aliases": [ 01:01:14.530 "lvs/lvol" 01:01:14.530 ], 01:01:14.530 "assigned_rate_limits": { 01:01:14.530 "r_mbytes_per_sec": 0, 01:01:14.530 "rw_ios_per_sec": 0, 01:01:14.530 "rw_mbytes_per_sec": 0, 01:01:14.530 "w_mbytes_per_sec": 0 01:01:14.530 }, 01:01:14.530 "block_size": 4096, 01:01:14.530 "claimed": false, 01:01:14.530 "driver_specific": { 01:01:14.530 "lvol": { 01:01:14.530 "base_bdev": "aio_bdev", 01:01:14.530 "clone": false, 01:01:14.530 "esnap_clone": false, 01:01:14.530 "lvol_store_uuid": "9eceb961-bfe8-40ee-93f3-22b54686d584", 01:01:14.530 "num_allocated_clusters": 38, 01:01:14.530 "snapshot": false, 01:01:14.530 "thin_provision": false 01:01:14.530 } 01:01:14.530 }, 01:01:14.530 "name": "b9cc4a61-6ee1-4dd6-8af1-048a0e4b3173", 01:01:14.530 "num_blocks": 38912, 01:01:14.530 "product_name": "Logical Volume", 01:01:14.530 "supported_io_types": { 01:01:14.530 "abort": false, 01:01:14.530 "compare": false, 01:01:14.530 "compare_and_write": false, 01:01:14.530 "copy": false, 01:01:14.530 "flush": false, 01:01:14.530 "get_zone_info": false, 01:01:14.530 "nvme_admin": false, 01:01:14.530 "nvme_io": false, 01:01:14.530 "nvme_io_md": false, 01:01:14.530 "nvme_iov_md": false, 01:01:14.530 "read": true, 01:01:14.530 "reset": true, 01:01:14.530 "seek_data": true, 01:01:14.530 "seek_hole": true, 01:01:14.530 "unmap": true, 01:01:14.531 "write": true, 01:01:14.531 "write_zeroes": true, 01:01:14.531 "zcopy": false, 01:01:14.531 "zone_append": false, 01:01:14.531 "zone_management": false 01:01:14.531 }, 01:01:14.531 "uuid": "b9cc4a61-6ee1-4dd6-8af1-048a0e4b3173", 01:01:14.531 "zoned": false 01:01:14.531 } 01:01:14.531 ] 01:01:14.531 10:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 01:01:14.531 10:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9eceb961-bfe8-40ee-93f3-22b54686d584 01:01:14.531 10:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 01:01:14.790 10:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 01:01:14.790 10:12:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 01:01:14.790 10:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9eceb961-bfe8-40ee-93f3-22b54686d584 01:01:15.049 10:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 01:01:15.049 10:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:01:15.049 [2024-12-09 10:12:22.090318] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 01:01:15.308 10:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9eceb961-bfe8-40ee-93f3-22b54686d584 01:01:15.309 10:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 01:01:15.309 10:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9eceb961-bfe8-40ee-93f3-22b54686d584 01:01:15.309 10:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:01:15.309 10:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:15.309 10:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:01:15.309 10:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:15.309 10:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:01:15.309 10:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:15.309 10:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:01:15.309 10:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 01:01:15.309 10:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9eceb961-bfe8-40ee-93f3-22b54686d584 01:01:15.309 2024/12/09 10:12:22 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:9eceb961-bfe8-40ee-93f3-22b54686d584], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 01:01:15.309 request: 01:01:15.309 { 01:01:15.309 "method": "bdev_lvol_get_lvstores", 01:01:15.309 "params": { 01:01:15.309 "uuid": "9eceb961-bfe8-40ee-93f3-22b54686d584" 01:01:15.309 } 01:01:15.309 } 
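Note: the JSON-RPC error here is the expected outcome, not a test failure: the NOT() helper inverts the exit status, so the test passes only if bdev_lvol_get_lvstores fails with -19 (No such device) once the AIO base bdev has been hot-removed. The same check in isolation would look roughly like:

    scripts/rpc.py bdev_aio_delete aio_bdev               # hot-remove closes the lvstore
    if scripts/rpc.py bdev_lvol_get_lvstores -u 9eceb961-bfe8-40ee-93f3-22b54686d584; then
        echo 'lvstore unexpectedly still registered' >&2
        exit 1
    fi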
01:01:15.309 Got JSON-RPC error response 01:01:15.309 GoRPCClient: error on JSON-RPC call 01:01:15.309 10:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 01:01:15.309 10:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:01:15.309 10:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:01:15.309 10:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:15.309 10:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:01:15.568 aio_bdev 01:01:15.568 10:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b9cc4a61-6ee1-4dd6-8af1-048a0e4b3173 01:01:15.568 10:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b9cc4a61-6ee1-4dd6-8af1-048a0e4b3173 01:01:15.568 10:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:01:15.568 10:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 01:01:15.568 10:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:01:15.568 10:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:01:15.569 10:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 01:01:15.828 10:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b9cc4a61-6ee1-4dd6-8af1-048a0e4b3173 -t 2000 01:01:16.087 [ 01:01:16.087 { 01:01:16.087 "aliases": [ 01:01:16.087 "lvs/lvol" 01:01:16.087 ], 01:01:16.087 "assigned_rate_limits": { 01:01:16.087 "r_mbytes_per_sec": 0, 01:01:16.087 "rw_ios_per_sec": 0, 01:01:16.087 "rw_mbytes_per_sec": 0, 01:01:16.087 "w_mbytes_per_sec": 0 01:01:16.087 }, 01:01:16.087 "block_size": 4096, 01:01:16.088 "claimed": false, 01:01:16.088 "driver_specific": { 01:01:16.088 "lvol": { 01:01:16.088 "base_bdev": "aio_bdev", 01:01:16.088 "clone": false, 01:01:16.088 "esnap_clone": false, 01:01:16.088 "lvol_store_uuid": "9eceb961-bfe8-40ee-93f3-22b54686d584", 01:01:16.088 "num_allocated_clusters": 38, 01:01:16.088 "snapshot": false, 01:01:16.088 "thin_provision": false 01:01:16.088 } 01:01:16.088 }, 01:01:16.088 "name": "b9cc4a61-6ee1-4dd6-8af1-048a0e4b3173", 01:01:16.088 "num_blocks": 38912, 01:01:16.088 "product_name": "Logical Volume", 01:01:16.088 "supported_io_types": { 01:01:16.088 "abort": false, 01:01:16.088 "compare": false, 01:01:16.088 "compare_and_write": false, 01:01:16.088 "copy": false, 01:01:16.088 "flush": false, 01:01:16.088 "get_zone_info": false, 01:01:16.088 "nvme_admin": false, 01:01:16.088 "nvme_io": false, 01:01:16.088 "nvme_io_md": false, 01:01:16.088 "nvme_iov_md": false, 
01:01:16.088 "read": true, 01:01:16.088 "reset": true, 01:01:16.088 "seek_data": true, 01:01:16.088 "seek_hole": true, 01:01:16.088 "unmap": true, 01:01:16.088 "write": true, 01:01:16.088 "write_zeroes": true, 01:01:16.088 "zcopy": false, 01:01:16.088 "zone_append": false, 01:01:16.088 "zone_management": false 01:01:16.088 }, 01:01:16.088 "uuid": "b9cc4a61-6ee1-4dd6-8af1-048a0e4b3173", 01:01:16.088 "zoned": false 01:01:16.088 } 01:01:16.088 ] 01:01:16.088 10:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 01:01:16.088 10:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9eceb961-bfe8-40ee-93f3-22b54686d584 01:01:16.088 10:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 01:01:16.347 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 01:01:16.347 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9eceb961-bfe8-40ee-93f3-22b54686d584 01:01:16.347 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 01:01:16.347 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 01:01:16.347 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b9cc4a61-6ee1-4dd6-8af1-048a0e4b3173 01:01:16.607 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9eceb961-bfe8-40ee-93f3-22b54686d584 01:01:16.873 10:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:01:17.142 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:01:17.713 01:01:17.713 real 0m19.080s 01:01:17.713 user 0m26.421s 01:01:17.713 sys 0m7.123s 01:01:17.713 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:17.713 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:01:17.713 ************************************ 01:01:17.713 END TEST lvs_grow_dirty 01:01:17.713 ************************************ 01:01:17.713 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 01:01:17.713 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 01:01:17.713 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 01:01:17.713 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 
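Note: with the cluster counts re-verified after recovery (61 free of 99 total, unchanged across the kill), the lvol, lvstore and AIO bdev are torn down, and the harness then archives the SPDK shared-memory trace for offline analysis. The lines that follow show exactly that; the essential archival step is:

    find /dev/shm -name '*.0' -printf '%f\n'                      # locates nvmf_trace.0
    tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0

where $output_dir is this job's output directory (spdk/../output in this run).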
01:01:17.713 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 01:01:17.713 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 01:01:17.713 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 01:01:17.713 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 01:01:17.713 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 01:01:17.713 nvmf_trace.0 01:01:17.713 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 01:01:17.713 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 01:01:17.713 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 01:01:17.713 10:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 01:01:19.091 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:01:19.091 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 01:01:19.091 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 01:01:19.091 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:01:19.091 rmmod nvme_tcp 01:01:19.091 rmmod nvme_fabrics 01:01:19.091 rmmod nvme_keyring 01:01:19.091 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:01:19.091 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 01:01:19.091 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 01:01:19.091 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 104541 ']' 01:01:19.091 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 104541 01:01:19.091 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 104541 ']' 01:01:19.091 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 104541 01:01:19.091 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 01:01:19.091 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:01:19.091 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104541 01:01:19.091 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:01:19.091 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:01:19.091 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104541' 
01:01:19.091 killing process with pid 104541 01:01:19.091 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 104541 01:01:19.091 10:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 104541 01:01:19.091 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:01:19.091 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:01:19.091 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:01:19.091 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 01:01:19.091 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 01:01:19.091 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:01:19.091 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 01:01:19.092 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:01:19.092 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:01:19.092 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:01:19.092 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:01:19.349 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:01:19.349 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:01:19.349 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:01:19.349 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:01:19.349 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:01:19.349 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:01:19.349 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:01:19.349 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:01:19.349 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:01:19.349 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:01:19.349 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:01:19.349 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@246 -- # remove_spdk_ns 01:01:19.349 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:01:19.349 10:12:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:01:19.349 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:01:19.349 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 01:01:19.349 01:01:19.349 real 0m39.918s 01:01:19.349 user 0m44.009s 01:01:19.349 sys 0m11.174s 01:01:19.349 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:19.349 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:01:19.349 ************************************ 01:01:19.349 END TEST nvmf_lvs_grow 01:01:19.349 ************************************ 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:01:19.609 ************************************ 01:01:19.609 START TEST nvmf_bdev_io_wait 01:01:19.609 ************************************ 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 01:01:19.609 * Looking for test storage... 
01:01:19.609 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:01:19.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:19.609 --rc genhtml_branch_coverage=1 01:01:19.609 --rc genhtml_function_coverage=1 01:01:19.609 --rc genhtml_legend=1 01:01:19.609 --rc geninfo_all_blocks=1 01:01:19.609 --rc geninfo_unexecuted_blocks=1 01:01:19.609 01:01:19.609 ' 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:01:19.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:19.609 --rc genhtml_branch_coverage=1 01:01:19.609 --rc genhtml_function_coverage=1 01:01:19.609 --rc genhtml_legend=1 01:01:19.609 --rc geninfo_all_blocks=1 01:01:19.609 --rc geninfo_unexecuted_blocks=1 01:01:19.609 01:01:19.609 ' 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:01:19.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:19.609 --rc genhtml_branch_coverage=1 01:01:19.609 --rc genhtml_function_coverage=1 01:01:19.609 --rc genhtml_legend=1 01:01:19.609 --rc geninfo_all_blocks=1 01:01:19.609 --rc geninfo_unexecuted_blocks=1 01:01:19.609 01:01:19.609 ' 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:01:19.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:19.609 --rc genhtml_branch_coverage=1 01:01:19.609 --rc genhtml_function_coverage=1 01:01:19.609 --rc genhtml_legend=1 01:01:19.609 --rc geninfo_all_blocks=1 01:01:19.609 --rc 
geninfo_unexecuted_blocks=1 01:01:19.609 01:01:19.609 ' 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:01:19.609 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:01:19.610 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:01:19.870 Cannot find device "nvmf_init_br" 01:01:19.870 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 01:01:19.870 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:01:19.870 Cannot find device "nvmf_init_br2" 01:01:19.870 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 01:01:19.870 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:01:19.870 Cannot find device "nvmf_tgt_br" 01:01:19.870 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 01:01:19.870 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:01:19.870 Cannot find device "nvmf_tgt_br2" 01:01:19.870 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 01:01:19.870 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:01:19.870 Cannot find device "nvmf_init_br" 01:01:19.870 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 01:01:19.870 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:01:19.870 Cannot find device "nvmf_init_br2" 01:01:19.870 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 01:01:19.870 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- 
# ip link set nvmf_tgt_br down 01:01:19.870 Cannot find device "nvmf_tgt_br" 01:01:19.870 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 01:01:19.870 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:01:19.870 Cannot find device "nvmf_tgt_br2" 01:01:19.870 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 01:01:19.870 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:01:19.870 Cannot find device "nvmf_br" 01:01:19.870 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 01:01:19.870 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:01:19.870 Cannot find device "nvmf_init_if" 01:01:19.870 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 01:01:19.870 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:01:19.870 Cannot find device "nvmf_init_if2" 01:01:19.870 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 01:01:19.870 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:01:19.870 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:01:19.870 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 01:01:19.870 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:01:19.870 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:01:19.870 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 01:01:19.870 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:01:19.870 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:01:19.870 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:01:19.870 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:01:19.870 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:01:19.870 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:01:19.870 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:01:19.870 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:01:19.870 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:01:20.130 10:12:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:01:20.130 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:01:20.130 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:01:20.130 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:01:20.130 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:01:20.130 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:01:20.130 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:01:20.130 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:01:20.130 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:01:20.131 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:01:20.131 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:01:20.131 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:01:20.131 10:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:01:20.131 10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:01:20.131 10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:01:20.131 10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:01:20.131 10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:01:20.131 10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:01:20.131 10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:01:20.131 10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:01:20.131 10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:01:20.131 10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:01:20.131 
10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:01:20.131 10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:01:20.131 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:01:20.131 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.125 ms 01:01:20.131 01:01:20.131 --- 10.0.0.3 ping statistics --- 01:01:20.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:20.131 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 01:01:20.131 10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:01:20.131 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:01:20.131 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.120 ms 01:01:20.131 01:01:20.131 --- 10.0.0.4 ping statistics --- 01:01:20.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:20.131 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 01:01:20.131 10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:01:20.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:01:20.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 01:01:20.131 01:01:20.131 --- 10.0.0.1 ping statistics --- 01:01:20.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:20.131 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 01:01:20.131 10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:01:20.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:01:20.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 01:01:20.131 01:01:20.131 --- 10.0.0.2 ping statistics --- 01:01:20.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:20.131 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 01:01:20.131 10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:01:20.131 10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 01:01:20.131 10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:01:20.131 10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:01:20.131 10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:01:20.131 10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:01:20.131 10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:01:20.131 10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:01:20.131 10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:01:20.131 10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 01:01:20.131 10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:01:20.131 10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 01:01:20.131 10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:01:20.131 10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=105022 01:01:20.131 10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 105022 01:01:20.131 10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 105022 ']' 01:01:20.131 10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:01:20.131 10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 01:01:20.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:01:20.131 10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
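The trace just above is waitforlisten polling until the freshly launched target answers on /var/tmp/spdk.sock. A minimal sketch of that wait pattern, not the literal helper from autotest_common.sh; the rpc.py probe and the 0.5 s back-off are assumptions for illustration:

wait_for_rpc_sock() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do              # max_retries=100, as traced above
        kill -0 "$pid" 2>/dev/null || return 1   # bail out if the target already died
        # any trivial RPC succeeds once the app is listening on the socket
        scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.5
    done
    return 1
}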
01:01:20.131 10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 01:01:20.131 10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 01:01:20.131 10:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:01:20.391 [2024-12-09 10:12:27.199541] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:01:20.391 [2024-12-09 10:12:27.200306] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 01:01:20.391 [2024-12-09 10:12:27.200352] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:01:20.391 [2024-12-09 10:12:27.353466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:01:20.650 [2024-12-09 10:12:27.435951] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:01:20.650 [2024-12-09 10:12:27.435998] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:01:20.650 [2024-12-09 10:12:27.436005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:01:20.650 [2024-12-09 10:12:27.436010] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:01:20.650 [2024-12-09 10:12:27.436015] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:01:20.650 [2024-12-09 10:12:27.437431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:01:20.650 [2024-12-09 10:12:27.437614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:20.650 [2024-12-09 10:12:27.437566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:01:20.650 [2024-12-09 10:12:27.437614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:01:20.650 [2024-12-09 10:12:27.438360] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
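For reference, the launch just traced as a close standalone equivalent (paths and flags copied from this run): the app lives inside the nvmf_tgt_ns_spdk namespace, -m 0xF (binary 1111) gives it cores 0-3, matching the four reactor_run notices above, and --wait-for-rpc parks initialization until framework_start_init arrives over RPC.

sudo ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
nvmfpid=$!    # the harness recorded this as nvmfpid=105022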
01:01:21.220 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:01:21.220 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 01:01:21.220 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:01:21.220 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 01:01:21.220 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:01:21.220 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:01:21.220 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 01:01:21.220 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:21.220 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:01:21.220 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:21.220 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 01:01:21.220 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:21.220 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:01:21.220 [2024-12-09 10:12:28.224249] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 01:01:21.220 [2024-12-09 10:12:28.224530] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 01:01:21.220 [2024-12-09 10:12:28.224881] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 01:01:21.221 [2024-12-09 10:12:28.225035] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
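Stripped of the xtrace noise, the target-side configuration (the two rpc_cmd calls above plus the provisioning calls traced next) reduces to the sequence below; rpc_cmd is the harness wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock. The deliberately tiny bdev_io pool and cache (-p 5 -c 1) are what force submissions onto the bdev-io-wait path this test exists to exercise.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # rpc_cmd equivalent
$rpc bdev_set_options -p 5 -c 1                    # tiny bdev_io pool/cache
$rpc framework_start_init                          # releases --wait-for-rpc
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420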
01:01:21.221 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:21.221 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:01:21.221 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:21.221 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:01:21.221 [2024-12-09 10:12:28.234565] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:01:21.221 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:21.221 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:01:21.221 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:21.221 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:01:21.482 Malloc0 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:01:21.482 [2024-12-09 10:12:28.318864] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=105075 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=105076 01:01:21.482 10:12:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=105078 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=105080 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:01:21.482 { 01:01:21.482 "params": { 01:01:21.482 "name": "Nvme$subsystem", 01:01:21.482 "trtype": "$TEST_TRANSPORT", 01:01:21.482 "traddr": "$NVMF_FIRST_TARGET_IP", 01:01:21.482 "adrfam": "ipv4", 01:01:21.482 "trsvcid": "$NVMF_PORT", 01:01:21.482 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:01:21.482 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:01:21.482 "hdgst": ${hdgst:-false}, 01:01:21.482 "ddgst": ${ddgst:-false} 01:01:21.482 }, 01:01:21.482 "method": "bdev_nvme_attach_controller" 01:01:21.482 } 01:01:21.482 EOF 01:01:21.482 )") 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:01:21.482 { 01:01:21.482 "params": { 01:01:21.482 "name": "Nvme$subsystem", 01:01:21.482 "trtype": "$TEST_TRANSPORT", 01:01:21.482 "traddr": "$NVMF_FIRST_TARGET_IP", 01:01:21.482 "adrfam": "ipv4", 01:01:21.482 "trsvcid": "$NVMF_PORT", 01:01:21.482 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:01:21.482 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:01:21.482 "hdgst": 
${hdgst:-false}, 01:01:21.482 "ddgst": ${ddgst:-false} 01:01:21.482 }, 01:01:21.482 "method": "bdev_nvme_attach_controller" 01:01:21.482 } 01:01:21.482 EOF 01:01:21.482 )") 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:01:21.482 { 01:01:21.482 "params": { 01:01:21.482 "name": "Nvme$subsystem", 01:01:21.482 "trtype": "$TEST_TRANSPORT", 01:01:21.482 "traddr": "$NVMF_FIRST_TARGET_IP", 01:01:21.482 "adrfam": "ipv4", 01:01:21.482 "trsvcid": "$NVMF_PORT", 01:01:21.482 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:01:21.482 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:01:21.482 "hdgst": ${hdgst:-false}, 01:01:21.482 "ddgst": ${ddgst:-false} 01:01:21.482 }, 01:01:21.482 "method": "bdev_nvme_attach_controller" 01:01:21.482 } 01:01:21.482 EOF 01:01:21.482 )") 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:01:21.482 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:01:21.482 { 01:01:21.482 "params": { 01:01:21.482 "name": "Nvme$subsystem", 01:01:21.482 "trtype": "$TEST_TRANSPORT", 01:01:21.482 "traddr": "$NVMF_FIRST_TARGET_IP", 01:01:21.482 "adrfam": "ipv4", 01:01:21.482 "trsvcid": "$NVMF_PORT", 01:01:21.482 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:01:21.482 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:01:21.482 "hdgst": ${hdgst:-false}, 01:01:21.482 "ddgst": ${ddgst:-false} 01:01:21.482 }, 01:01:21.482 "method": "bdev_nvme_attach_controller" 01:01:21.482 } 01:01:21.483 EOF 01:01:21.483 )") 01:01:21.483 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 01:01:21.483 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 01:01:21.483 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 01:01:21.483 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
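The heredoc machinery above is gen_nvmf_target_json building a bdev_nvme_attach_controller config for each bdevperf instance; jq then validates and re-serializes it (the resolved JSON is printed next). Each instance reads the result through a process-substitution path, which is where the /dev/fd/63 in the command lines comes from. A close standalone equivalent for the write job:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
    -q 128 -o 4096 -w write -t 1 -s 256
# <(...) expands to a /dev/fd/NN path -- the /dev/fd/63 seen in the traces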
01:01:21.483 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 01:01:21.483 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:01:21.483 "params": { 01:01:21.483 "name": "Nvme1", 01:01:21.483 "trtype": "tcp", 01:01:21.483 "traddr": "10.0.0.3", 01:01:21.483 "adrfam": "ipv4", 01:01:21.483 "trsvcid": "4420", 01:01:21.483 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:01:21.483 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:01:21.483 "hdgst": false, 01:01:21.483 "ddgst": false 01:01:21.483 }, 01:01:21.483 "method": "bdev_nvme_attach_controller" 01:01:21.483 }' 01:01:21.483 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 01:01:21.483 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 01:01:21.483 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 01:01:21.483 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:01:21.483 "params": { 01:01:21.483 "name": "Nvme1", 01:01:21.483 "trtype": "tcp", 01:01:21.483 "traddr": "10.0.0.3", 01:01:21.483 "adrfam": "ipv4", 01:01:21.483 "trsvcid": "4420", 01:01:21.483 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:01:21.483 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:01:21.483 "hdgst": false, 01:01:21.483 "ddgst": false 01:01:21.483 }, 01:01:21.483 "method": "bdev_nvme_attach_controller" 01:01:21.483 }' 01:01:21.483 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 01:01:21.483 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 01:01:21.483 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:01:21.483 "params": { 01:01:21.483 "name": "Nvme1", 01:01:21.483 "trtype": "tcp", 01:01:21.483 "traddr": "10.0.0.3", 01:01:21.483 "adrfam": "ipv4", 01:01:21.483 "trsvcid": "4420", 01:01:21.483 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:01:21.483 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:01:21.483 "hdgst": false, 01:01:21.483 "ddgst": false 01:01:21.483 }, 01:01:21.483 "method": "bdev_nvme_attach_controller" 01:01:21.483 }' 01:01:21.483 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 01:01:21.483 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:01:21.483 "params": { 01:01:21.483 "name": "Nvme1", 01:01:21.483 "trtype": "tcp", 01:01:21.483 "traddr": "10.0.0.3", 01:01:21.483 "adrfam": "ipv4", 01:01:21.483 "trsvcid": "4420", 01:01:21.483 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:01:21.483 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:01:21.483 "hdgst": false, 01:01:21.483 "ddgst": false 01:01:21.483 }, 01:01:21.483 "method": "bdev_nvme_attach_controller" 01:01:21.483 }' 01:01:21.483 [2024-12-09 10:12:28.378681] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
01:01:21.483 [2024-12-09 10:12:28.378746] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 01:01:21.483 [2024-12-09 10:12:28.380379] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 01:01:21.483 [2024-12-09 10:12:28.380465] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 01:01:21.483 [2024-12-09 10:12:28.390482] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 01:01:21.483 [2024-12-09 10:12:28.390548] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 01:01:21.483 [2024-12-09 10:12:28.391918] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 01:01:21.483 [2024-12-09 10:12:28.391978] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 01:01:21.483 10:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 105075 01:01:21.743 [2024-12-09 10:12:28.558581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:21.743 [2024-12-09 10:12:28.604227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 01:01:21.743 [2024-12-09 10:12:28.635166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:21.743 [2024-12-09 10:12:28.678404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 01:01:21.743 [2024-12-09 10:12:28.698130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:21.743 Running I/O for 1 seconds... 01:01:21.743 [2024-12-09 10:12:28.735610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:21.743 [2024-12-09 10:12:28.741366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 01:01:21.743 [2024-12-09 10:12:28.778537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 01:01:22.002 Running I/O for 1 seconds... 01:01:22.002 Running I/O for 1 seconds... 01:01:22.002 Running I/O for 1 seconds... 
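All four bdevperf instances now run one workload apiece; their core masks (0x10, 0x20, 0x40, 0x80) pin them to cores 4-7, which is why the reactor notices above cite those cores. In the result tables that follow, the MiB/s column is simply IOPS multiplied by the 4096-byte I/O size; a quick sanity check for the write job's 12873.56 IOPS:

awk 'BEGIN { printf "%.2f MiB/s\n", 12873.56 * 4096 / 1048576 }'    # -> 50.29 MiB/s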
01:01:22.941 12821.00 IOPS, 50.08 MiB/s
01:01:22.941 Latency(us)
01:01:22.941 [2024-12-09T10:12:29.985Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:01:22.941 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
01:01:22.941 Nvme1n1 : 1.01 12873.56 50.29 0.00 0.00 9909.31 3448.51 11790.76
01:01:22.941 [2024-12-09T10:12:29.985Z] ===================================================================================================================
01:01:22.941 [2024-12-09T10:12:29.985Z] Total : 12873.56 50.29 0.00 0.00 9909.31 3448.51 11790.76
01:01:22.941 252680.00 IOPS, 987.03 MiB/s
01:01:22.941 Latency(us)
01:01:22.941 [2024-12-09T10:12:29.985Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:01:22.941 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
01:01:22.941 Nvme1n1 : 1.00 252290.39 985.51 0.00 0.00 504.56 215.53 1523.93
01:01:22.941 [2024-12-09T10:12:29.985Z] ===================================================================================================================
01:01:22.941 [2024-12-09T10:12:29.985Z] Total : 252290.39 985.51 0.00 0.00 504.56 215.53 1523.93
01:01:22.941 9561.00 IOPS, 37.35 MiB/s
01:01:22.941 Latency(us)
01:01:22.941 [2024-12-09T10:12:29.985Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:01:22.941 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
01:01:22.941 Nvme1n1 : 1.01 9613.48 37.55 0.00 0.00 13258.03 3806.24 16140.74
01:01:22.941 [2024-12-09T10:12:29.985Z] ===================================================================================================================
01:01:22.941 [2024-12-09T10:12:29.985Z] Total : 9613.48 37.55 0.00 0.00 13258.03 3806.24 16140.74
01:01:22.941 10678.00 IOPS, 41.71 MiB/s
01:01:22.941 Latency(us)
01:01:22.941 [2024-12-09T10:12:29.985Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:01:22.941 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
01:01:22.941 Nvme1n1 : 1.01 10788.96 42.14 0.00 0.00 11834.80 3434.20 17171.00
01:01:22.941 [2024-12-09T10:12:29.985Z] ===================================================================================================================
01:01:22.941 [2024-12-09T10:12:29.985Z] Total : 10788.96 42.14 0.00 0.00 11834.80 3434.20 17171.00
01:01:22.941 10:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 105076
01:01:23.201 10:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 105078
01:01:23.201 10:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 105080
01:01:23.201 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
01:01:23.201 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
01:01:23.201 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
01:01:23.201 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:01:23.201 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
01:01:23.201 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait --
target/bdev_io_wait.sh@46 -- # nvmftestfini 01:01:23.201 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 01:01:23.201 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 01:01:23.201 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:01:23.201 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 01:01:23.201 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 01:01:23.201 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:01:23.201 rmmod nvme_tcp 01:01:23.201 rmmod nvme_fabrics 01:01:23.201 rmmod nvme_keyring 01:01:23.201 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:01:23.201 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 01:01:23.201 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 01:01:23.201 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 105022 ']' 01:01:23.201 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 105022 01:01:23.201 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 105022 ']' 01:01:23.201 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 105022 01:01:23.201 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 01:01:23.201 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:01:23.201 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105022 01:01:23.201 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:01:23.201 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:01:23.201 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105022' 01:01:23.201 killing process with pid 105022 01:01:23.201 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 105022 01:01:23.201 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 105022 01:01:23.461 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:01:23.461 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:01:23.461 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:01:23.461 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 01:01:23.461 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 01:01:23.461 
10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:01:23.461 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 01:01:23.462 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:01:23.462 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:01:23.462 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:01:23.462 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:01:23.462 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:01:23.462 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:01:23.462 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:01:23.462 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:01:23.722 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:01:23.722 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:01:23.722 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:01:23.722 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:01:23.722 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:01:23.722 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:01:23.722 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:01:23.722 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 01:01:23.722 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:01:23.722 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:01:23.722 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:01:23.722 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 01:01:23.722 01:01:23.722 real 0m4.248s 01:01:23.722 user 0m12.291s 01:01:23.722 sys 0m2.466s 01:01:23.722 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:23.722 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:01:23.722 ************************************ 01:01:23.722 END TEST nvmf_bdev_io_wait 01:01:23.722 
************************************ 01:01:23.722 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 01:01:23.722 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:01:23.722 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:23.722 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:01:23.722 ************************************ 01:01:23.722 START TEST nvmf_queue_depth 01:01:23.722 ************************************ 01:01:23.722 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 01:01:23.983 * Looking for test storage... 01:01:23.983 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:01:23.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:23.983 --rc genhtml_branch_coverage=1 01:01:23.983 --rc genhtml_function_coverage=1 01:01:23.983 --rc genhtml_legend=1 01:01:23.983 --rc geninfo_all_blocks=1 01:01:23.983 --rc geninfo_unexecuted_blocks=1 01:01:23.983 01:01:23.983 ' 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:01:23.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:23.983 --rc genhtml_branch_coverage=1 01:01:23.983 --rc genhtml_function_coverage=1 01:01:23.983 --rc genhtml_legend=1 01:01:23.983 --rc geninfo_all_blocks=1 01:01:23.983 --rc geninfo_unexecuted_blocks=1 01:01:23.983 01:01:23.983 ' 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:01:23.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:23.983 --rc genhtml_branch_coverage=1 01:01:23.983 --rc genhtml_function_coverage=1 01:01:23.983 --rc genhtml_legend=1 01:01:23.983 --rc geninfo_all_blocks=1 01:01:23.983 --rc geninfo_unexecuted_blocks=1 01:01:23.983 01:01:23.983 ' 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:01:23.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:23.983 --rc genhtml_branch_coverage=1 01:01:23.983 --rc genhtml_function_coverage=1 01:01:23.983 --rc genhtml_legend=1 01:01:23.983 --rc geninfo_all_blocks=1 01:01:23.983 --rc 
geninfo_unexecuted_blocks=1 01:01:23.983 01:01:23.983 ' 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:01:23.983 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@148 -- # 
NVMF_SECOND_TARGET_IP=10.0.0.4 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:01:23.984 Cannot find device "nvmf_init_br" 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 01:01:23.984 10:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:01:23.984 Cannot find device "nvmf_init_br2" 01:01:23.984 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 01:01:23.984 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:01:23.984 Cannot find device "nvmf_tgt_br" 01:01:23.984 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 01:01:23.984 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:01:24.244 Cannot find device "nvmf_tgt_br2" 01:01:24.244 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 01:01:24.244 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:01:24.244 Cannot find device "nvmf_init_br" 01:01:24.244 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 01:01:24.244 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:01:24.244 Cannot find device "nvmf_init_br2" 01:01:24.244 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 01:01:24.244 
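Each "Cannot find device" message here, and in the probes that continue below, is deliberately answered with `true`: before building anything, nvmf_veth_init re-runs the same teardown that nvmf_veth_fini performs at the end of a test, so a fresh host and a half-cleaned host start from identical state. A standalone sketch of that idempotent pre-clean, using the interface and namespace names from this log (the 2>/dev/null suppression is a cosmetic choice of this sketch; the scripts let the errors print and chain `true` instead):

    # Best-effort cleanup; every step must tolerate already-missing devices.
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster 2>/dev/null || true
        ip link set "$dev" down 2>/dev/null || true
    done
    ip link delete nvmf_br type bridge 2>/dev/null || true
    ip link delete nvmf_init_if 2>/dev/null || true
    ip link delete nvmf_init_if2 2>/dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true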
10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:01:24.244 Cannot find device "nvmf_tgt_br" 01:01:24.244 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 01:01:24.244 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:01:24.244 Cannot find device "nvmf_tgt_br2" 01:01:24.244 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 01:01:24.244 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:01:24.244 Cannot find device "nvmf_br" 01:01:24.244 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 01:01:24.244 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:01:24.244 Cannot find device "nvmf_init_if" 01:01:24.244 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 01:01:24.244 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:01:24.244 Cannot find device "nvmf_init_if2" 01:01:24.244 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 01:01:24.244 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:01:24.244 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:01:24.244 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 01:01:24.244 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:01:24.244 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:01:24.244 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 01:01:24.244 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:01:24.244 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:01:24.244 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:01:24.244 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:01:24.244 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:01:24.244 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:01:24.244 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:01:24.244 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:01:24.244 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@191 -- 
# ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:01:24.244 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:01:24.244 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:01:24.244 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:01:24.244 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:01:24.244 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:01:24.244 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:01:24.504 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:01:24.504 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:01:24.504 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:01:24.504 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:01:24.504 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:01:24.504 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:01:24.504 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:01:24.504 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:01:24.504 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:01:24.504 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:01:24.504 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:01:24.504 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:01:24.504 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:01:24.504 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:01:24.504 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:01:24.504 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i 
nvmf_br -o nvmf_br -j ACCEPT 01:01:24.504 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:01:24.504 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:01:24.504 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:01:24.504 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.140 ms 01:01:24.504 01:01:24.504 --- 10.0.0.3 ping statistics --- 01:01:24.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:24.504 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 01:01:24.504 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:01:24.504 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:01:24.504 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.077 ms 01:01:24.504 01:01:24.504 --- 10.0.0.4 ping statistics --- 01:01:24.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:24.504 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 01:01:24.504 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:01:24.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:01:24.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 01:01:24.504 01:01:24.504 --- 10.0.0.1 ping statistics --- 01:01:24.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:24.504 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 01:01:24.504 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:01:24.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:01:24.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 01:01:24.504 01:01:24.504 --- 10.0.0.2 ping statistics --- 01:01:24.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:24.504 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 01:01:24.504 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:01:24.504 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 01:01:24.504 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:01:24.504 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:01:24.504 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:01:24.504 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:01:24.504 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:01:24.504 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:01:24.504 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:01:24.504 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 01:01:24.504 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:01:24.504 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 01:01:24.504 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:01:24.504 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=105344 01:01:24.505 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 01:01:24.505 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 105344 01:01:24.505 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 105344 ']' 01:01:24.505 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:01:24.505 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 01:01:24.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:01:24.505 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
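The four pings above verify every leg of the virtual topology end to end before nvmf_tgt is launched inside the namespace (via ip netns exec, with --interrupt-mode and core mask 0x2, as traced just above). Reduced to a single initiator/target pair, the setup built by the preceding commands looks like this; the real init creates a second pair on each side (10.0.0.2 and 10.0.0.4) in the same way:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays in the root namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end is pushed into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br                      # bridge the two stranded peer ends together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                           # root namespace -> namespaced target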
01:01:24.505 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 01:01:24.505 10:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:01:24.505 [2024-12-09 10:12:31.500054] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:01:24.505 [2024-12-09 10:12:31.500951] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 01:01:24.505 [2024-12-09 10:12:31.501004] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:01:24.765 [2024-12-09 10:12:31.636008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:24.765 [2024-12-09 10:12:31.684716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:01:24.765 [2024-12-09 10:12:31.684764] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:01:24.765 [2024-12-09 10:12:31.684771] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:01:24.765 [2024-12-09 10:12:31.684776] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:01:24.765 [2024-12-09 10:12:31.684780] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:01:24.765 [2024-12-09 10:12:31.685028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:01:24.765 [2024-12-09 10:12:31.755745] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:01:24.765 [2024-12-09 10:12:31.755967] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
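The target is now up: one reactor on core 1 and both spdk_threads in interrupt mode, matching the --interrupt-mode flag it was started with. waitforlisten, which printed the "Waiting for process..." line above, polls the RPC socket until the application answers; the (( i == 0 )) test that follows below is its loop-exhausted check. A hedged sketch of that polling loop (the helper lives in common/autotest_common.sh; the retry budget and the rpc_get_methods probe are assumptions of this sketch):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for ((i = 100; i > 0; i--)); do
        # Any cheap RPC serves as a liveness probe; it succeeds once the socket is served.
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done
    (( i == 0 )) && { echo 'nvmf_tgt never started listening'; exit 1; }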
01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:01:25.734 [2024-12-09 10:12:32.489903] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:01:25.734 Malloc0 01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
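The five rpc_cmd calls above provision the whole export: a TCP transport, a RAM-backed bdev, a subsystem, its namespace, and a listener (whose confirmation notice follows below). Stripped of the test harness, the same sequence as direct rpc.py invocations against the default /var/tmp/spdk.sock would be:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192      # transport flags exactly as passed by queue_depth.sh
    "$rpc" bdev_malloc_create 64 512 -b Malloc0         # 64 MiB malloc bdev, 512-byte blocks
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420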
01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:01:25.734 [2024-12-09 10:12:32.569747] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=105394 01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 105394 /var/tmp/bdevperf.sock 01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 105394 ']' 01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 01:01:25.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 01:01:25.734 10:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:01:25.734 [2024-12-09 10:12:32.616434] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
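bdevperf has been started with -z, so it idles until instructions arrive on its private RPC socket (-r /var/tmp/bdevperf.sock); its EAL startup banner continues below, after which the log shows the controller being attached and the run being kicked off. Condensed, the client half of this queue-depth test amounts to:

    sock=/var/tmp/bdevperf.sock
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r "$sock" -q 1024 -o 4096 -w verify -t 10 &          # queue depth 1024, 4 KiB verify I/O, 10 s
    # ...wait for $sock to accept RPCs, as with the target above...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests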
01:01:25.734 [2024-12-09 10:12:32.616506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105394 ]
01:01:25.734 [2024-12-09 10:12:32.754763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
01:01:26.009 [2024-12-09 10:12:32.825153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
01:01:26.578 10:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
01:01:26.578 10:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
01:01:26.578 10:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
01:01:26.578 10:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
01:01:26.578 10:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
01:01:26.578 NVMe0n1
01:01:26.578 10:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:01:26.578 10:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
01:01:26.838 Running I/O for 10 seconds...
01:01:28.713 10986.00 IOPS, 42.91 MiB/s
[2024-12-09T10:12:36.697Z] 11245.50 IOPS, 43.93 MiB/s
[2024-12-09T10:12:38.079Z] 11318.67 IOPS, 44.21 MiB/s
[2024-12-09T10:12:39.017Z] 11424.75 IOPS, 44.63 MiB/s
[2024-12-09T10:12:39.956Z] 11488.00 IOPS, 44.88 MiB/s
[2024-12-09T10:12:40.895Z] 11561.00 IOPS, 45.16 MiB/s
[2024-12-09T10:12:41.833Z] 11577.71 IOPS, 45.23 MiB/s
[2024-12-09T10:12:42.770Z] 11454.88 IOPS, 44.75 MiB/s
[2024-12-09T10:12:43.708Z] 11548.44 IOPS, 45.11 MiB/s
[2024-12-09T10:12:43.968Z] 11598.50 IOPS, 45.31 MiB/s
01:01:36.924                                                                                                  Latency(us)
01:01:36.924 [2024-12-09T10:12:43.968Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
01:01:36.924 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
01:01:36.924      Verification LBA range: start 0x0 length 0x4000
01:01:36.924      NVMe0n1                :      10.06   11635.12      45.45       0.00       0.00   87719.97   15110.48  177662.66
01:01:36.924 [2024-12-09T10:12:43.968Z] ===================================================================================================================
01:01:36.924 [2024-12-09T10:12:43.968Z] Total                       :              11635.12      45.45       0.00       0.00   87719.97   15110.48  177662.66
01:01:36.924 {
01:01:36.924   "results": [
01:01:36.924     {
01:01:36.924       "job": "NVMe0n1",
01:01:36.924       "core_mask": "0x1",
01:01:36.924       "workload": "verify",
01:01:36.924       "status": "finished",
01:01:36.924       "verify_range": {
01:01:36.924         "start": 0,
01:01:36.924         "length": 16384
01:01:36.924       },
01:01:36.924       "queue_depth": 1024,
01:01:36.924       "io_size": 4096,
01:01:36.924       "runtime": 10.056532,
01:01:36.924       "iops": 11635.124315221192,
01:01:36.924       "mibps": 45.44970435633278,
01:01:36.924       "io_failed": 0,
01:01:36.924       "io_timeout": 0,
01:01:36.924       "avg_latency_us": 87719.96596501123,
01:01:36.925       "min_latency_us": 15110.48384279476,
01:01:36.925       "max_latency_us": 177662.65851528384 }
01:01:36.925   ],
01:01:36.925   "core_count": 1
01:01:36.925 }
01:01:36.925 10:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 105394
01:01:36.925 10:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 105394 ']'
01:01:36.925 10:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 105394
01:01:36.925 10:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
01:01:36.925 10:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:01:36.925 10:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105394
01:01:36.925 10:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0
01:01:36.925 10:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 105394
10:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105394'
10:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 105394
Received shutdown signal, test time was about 10.000000 seconds
01:01:36.925
01:01:36.925                                                                                                  Latency(us)
01:01:36.925 [2024-12-09T10:12:43.969Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
01:01:36.925 [2024-12-09T10:12:43.969Z] ===================================================================================================================
01:01:36.925 [2024-12-09T10:12:43.969Z] Total                       :                  0.00       0.00       0.00       0.00       0.00       0.00       0.00
01:01:36.925 10:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 105394
01:01:37.185 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
01:01:37.185 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
01:01:37.185 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup
01:01:37.185 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync
01:01:37.185 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
01:01:37.185 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e
01:01:37.185 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20}
01:01:37.185 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
01:01:37.185 rmmod nvme_tcp
01:01:37.185 rmmod nvme_fabrics
01:01:37.185 rmmod nvme_keyring
01:01:37.185 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
01:01:37.185 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e
01:01:37.185 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 01:01:37.185
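killprocess, traced at common/autotest_common.sh@954-978 above, is deliberately cautious: it confirms the pid is non-empty and still alive (kill -0), inspects the process name before signalling, announces the kill, and then reaps the target with wait so the app's exit status propagates into the test result. A simplified sketch (the real helper also special-cases sudo-wrapped processes, elided here):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1               # bail out if it already exited
        local name
        name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_1 for an SPDK app
        # (a name of "sudo" takes a different path in the real helper)
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                              # reap and surface the exit code
    }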
10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 105344 ']' 01:01:37.185 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 105344 01:01:37.185 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 105344 ']' 01:01:37.185 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 105344 01:01:37.185 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 01:01:37.185 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:01:37.185 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105344 01:01:37.445 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:01:37.445 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:01:37.445 killing process with pid 105344 01:01:37.445 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105344' 01:01:37.445 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 105344 01:01:37.445 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 105344 01:01:37.445 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:01:37.445 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:01:37.445 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:01:37.445 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 01:01:37.445 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 01:01:37.445 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:01:37.445 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 01:01:37.445 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:01:37.445 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:01:37.445 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:01:37.445 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:01:37.705 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:01:37.705 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:01:37.705 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:01:37.705 10:12:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:01:37.705 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:01:37.705 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:01:37.705 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:01:37.705 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:01:37.705 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:01:37.705 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:01:37.705 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:01:37.705 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 01:01:37.705 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:01:37.705 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:01:37.705 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:01:37.705 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 01:01:37.705 01:01:37.705 real 0m13.970s 01:01:37.705 user 0m21.871s 01:01:37.705 sys 0m2.947s 01:01:37.705 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:37.705 ************************************ 01:01:37.705 END TEST nvmf_queue_depth 01:01:37.705 ************************************ 01:01:37.705 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:01:37.966 ************************************ 01:01:37.966 START TEST nvmf_target_multipath 01:01:37.966 ************************************ 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 01:01:37.966 * Looking for test storage... 
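run_test, which closed the queue_depth banner above and opened the multipath one, is the wrapper that frames every sub-test in this log: it prints the START/END markers, times the script (the real/user/sys summary above), and rejects malformed invocations (the '[' 4 -le 1 ']' trace is its argument-count guard). Roughly, and omitting the bookkeeping the real helper does around timing and xtrace:

    run_test() {
        (( $# <= 1 )) && return 1     # mirrors the '[ $# -le 1 ]' guard: need a name plus a command
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }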
01:01:37.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:01:37.966 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 01:01:37.967 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:01:37.967 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:01:37.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:37.967 --rc genhtml_branch_coverage=1 01:01:37.967 --rc genhtml_function_coverage=1 01:01:37.967 --rc genhtml_legend=1 01:01:37.967 --rc geninfo_all_blocks=1 01:01:37.967 --rc geninfo_unexecuted_blocks=1 01:01:37.967 01:01:37.967 ' 01:01:37.967 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:01:37.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:37.967 --rc genhtml_branch_coverage=1 01:01:37.967 --rc genhtml_function_coverage=1 01:01:37.967 --rc genhtml_legend=1 01:01:37.967 --rc geninfo_all_blocks=1 01:01:37.967 --rc geninfo_unexecuted_blocks=1 01:01:37.967 01:01:37.967 ' 01:01:37.967 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:01:37.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:37.967 --rc genhtml_branch_coverage=1 01:01:37.967 --rc genhtml_function_coverage=1 01:01:37.967 --rc genhtml_legend=1 01:01:37.967 --rc geninfo_all_blocks=1 01:01:37.967 --rc geninfo_unexecuted_blocks=1 01:01:37.967 01:01:37.967 ' 01:01:37.967 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:01:37.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:37.967 --rc genhtml_branch_coverage=1 01:01:37.967 --rc genhtml_function_coverage=1 01:01:37.967 --rc 
genhtml_legend=1 01:01:37.967 --rc geninfo_all_blocks=1 01:01:37.967 --rc geninfo_unexecuted_blocks=1 01:01:37.967 01:01:37.967 ' 01:01:37.967 10:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:01:37.967 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:01:38.226 10:12:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:01:38.226 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:01:38.227 10:12:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:01:38.227 Cannot find device "nvmf_init_br" 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:01:38.227 Cannot find device "nvmf_init_br2" 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:01:38.227 Cannot find device "nvmf_tgt_br" 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:01:38.227 Cannot find device "nvmf_tgt_br2" 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set 
nvmf_init_br down 01:01:38.227 Cannot find device "nvmf_init_br" 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:01:38.227 Cannot find device "nvmf_init_br2" 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:01:38.227 Cannot find device "nvmf_tgt_br" 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:01:38.227 Cannot find device "nvmf_tgt_br2" 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:01:38.227 Cannot find device "nvmf_br" 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:01:38.227 Cannot find device "nvmf_init_if" 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # true 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:01:38.227 Cannot find device "nvmf_init_if2" 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:01:38.227 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:01:38.227 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:01:38.227 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:01:38.486 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:01:38.486 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:01:38.486 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add 
nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:01:38.486 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:01:38.486 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:01:38.486 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:01:38.486 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:01:38.486 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:01:38.486 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:01:38.486 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:01:38.486 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:01:38.486 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:01:38.486 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:01:38.486 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:01:38.486 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:01:38.486 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:01:38.486 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:01:38.486 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:01:38.487 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:01:38.487 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:01:38.487 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:01:38.487 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:01:38.487 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:01:38.487 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:01:38.487 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:01:38.487 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
01:01:38.487 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
01:01:38.487 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
01:01:38.487 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
01:01:38.487 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
01:01:38.487 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
01:01:38.487 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
01:01:38.487 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.116 ms
01:01:38.487
01:01:38.487 --- 10.0.0.3 ping statistics ---
01:01:38.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms
01:01:38.487 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms
01:01:38.487 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
01:01:38.487 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
01:01:38.487 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.134 ms
01:01:38.487
01:01:38.487 --- 10.0.0.4 ping statistics ---
01:01:38.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms
01:01:38.487 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms
01:01:38.487 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
01:01:38.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
01:01:38.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms
01:01:38.487
01:01:38.487 --- 10.0.0.1 ping statistics ---
01:01:38.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms
01:01:38.487 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms
01:01:38.487 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
01:01:38.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
01:01:38.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms
01:01:38.487
01:01:38.487 --- 10.0.0.2 ping statistics ---
01:01:38.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms
01:01:38.487 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms
01:01:38.487 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
01:01:38.487 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0
01:01:38.487 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']'
01:01:38.487 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
01:01:38.487 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
01:01:38.487 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
01:01:38.487 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
01:01:38.487 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
01:01:38.487 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp
01:01:38.747 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']'
01:01:38.747 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']'
01:01:38.747 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF
01:01:38.747 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
01:01:38.747 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable
01:01:38.747 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
01:01:38.747 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=105782
01:01:38.747 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF
01:01:38.747 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 105782
01:01:38.747 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 105782 ']'
01:01:38.747 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
01:01:38.747 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100
01:01:38.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
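Condensed from the nvmf_veth_init trace above, the topology those four pings just verified is roughly the following (interface names and addresses are taken from the log; the second initiator/target pair, nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2 and 10.0.0.4, is built the same way and omitted here; this is an illustrative reduction, not the exact nvmf/common.sh code):

    # the target side lives in its own network namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end, 10.0.0.1/24
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end, 10.0.0.3/24
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    # a bridge ties the *_br peers together so the two ends can reach each other
    ip link add nvmf_br type bridge
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # accept NVMe/TCP traffic on port 4420 and allow forwarding across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT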
01:01:38.747 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:01:38.747 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 01:01:38.747 10:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 01:01:38.747 [2024-12-09 10:12:45.598715] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:01:38.747 [2024-12-09 10:12:45.599599] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 01:01:38.747 [2024-12-09 10:12:45.599643] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:01:38.747 [2024-12-09 10:12:45.750647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:01:39.006 [2024-12-09 10:12:45.817163] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:01:39.006 [2024-12-09 10:12:45.817209] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:01:39.006 [2024-12-09 10:12:45.817216] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:01:39.006 [2024-12-09 10:12:45.817221] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:01:39.006 [2024-12-09 10:12:45.817225] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:01:39.006 [2024-12-09 10:12:45.818450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:01:39.006 [2024-12-09 10:12:45.818556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:01:39.006 [2024-12-09 10:12:45.818658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:39.006 [2024-12-09 10:12:45.818659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:01:39.006 [2024-12-09 10:12:45.946727] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:01:39.006 [2024-12-09 10:12:45.947505] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 01:01:39.006 [2024-12-09 10:12:45.947956] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 01:01:39.006 [2024-12-09 10:12:45.948747] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 01:01:39.006 [2024-12-09 10:12:45.948756] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
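With the namespace reachable, nvmfappstart launches the target inside it and waitforlisten blocks until the RPC socket answers. Stripped of helper indirection, that amounts to roughly the following (the nvmf_tgt command line is copied from the trace; the readiness loop is illustrative, not the exact waitforlisten code):

    # start nvmf_tgt in the target namespace: shm id 0, all tracepoint groups,
    # interrupt mode, reactors on cores 0-3 (-m 0xF)
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!
    # poll until the app is up and serving RPCs on /var/tmp/spdk.sock
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

The reactor notices above confirm all four cores came up in interrupt mode, which is the configuration this test run is meant to exercise.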
01:01:39.575 10:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:01:39.575 10:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 01:01:39.575 10:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:01:39.575 10:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 01:01:39.575 10:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 01:01:39.575 10:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:01:39.575 10:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:01:39.835 [2024-12-09 10:12:46.696153] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:01:39.835 10:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:01:40.094 Malloc0 01:01:40.094 10:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 01:01:40.354 10:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:01:40.614 10:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:01:40.614 [2024-12-09 10:12:47.595998] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:01:40.614 10:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 01:01:40.873 [2024-12-09 10:12:47.815969] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 01:01:40.873 10:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 01:01:41.133 10:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 01:01:41.133 10:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 01:01:41.133 10:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 01:01:41.133 10:12:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 01:01:41.133 10:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 01:01:41.133 10:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 01:01:43.672 10:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:01:43.672 10:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 01:01:43.672 10:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:01:43.672 10:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 01:01:43.672 10:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 01:01:43.672 10:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 01:01:43.672 10:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 01:01:43.672 10:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 01:01:43.672 10:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 01:01:43.672 10:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 01:01:43.672 10:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 01:01:43.672 10:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 01:01:43.672 10:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 01:01:43.672 10:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 01:01:43.672 10:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 01:01:43.672 10:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 01:01:43.672 10:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 01:01:43.672 10:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 01:01:43.672 10:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 01:01:43.672 10:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 01:01:43.672 10:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized
01:01:43.672 10:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
01:01:43.672 10:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
01:01:43.672 10:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
01:01:43.672 10:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]]
01:01:43.672 10:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized
01:01:43.672 10:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized
01:01:43.672 10:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
01:01:43.672 10:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
01:01:43.672 10:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
01:01:43.672 10:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]]
01:01:43.672 10:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa
01:01:43.672 10:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=105917
01:01:43.672 10:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v
01:01:43.672 10:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1
01:01:43.672 [global]
01:01:43.672 thread=1
01:01:43.672 invalidate=1
01:01:43.672 rw=randrw
01:01:43.672 time_based=1
01:01:43.672 runtime=6
01:01:43.672 ioengine=libaio
01:01:43.672 direct=1
01:01:43.672 bs=4096
01:01:43.672 iodepth=128
01:01:43.672 norandommap=0
01:01:43.672 numjobs=1
01:01:43.672
01:01:43.672 verify_dump=1
01:01:43.672 verify_backlog=512
01:01:43.672 verify_state_save=0
01:01:43.672 do_verify=1
01:01:43.672 verify=crc32c-intel
01:01:43.672 [job0]
01:01:43.672 filename=/dev/nvme0n1
01:01:43.672 Could not set queue depth (nvme0n1)
01:01:43.672 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
01:01:43.672 fio-3.35
01:01:43.672 Starting 1 thread
01:01:44.241 10:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
01:01:44.502 10:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized
01:01:44.764 10:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 01:01:44.764 10:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 01:01:44.764 10:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:01:44.764 10:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:01:44.764 10:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:01:44.764 10:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:01:44.764 10:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 01:01:44.764 10:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 01:01:44.764 10:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:01:44.764 10:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:01:44.764 10:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:01:44.764 10:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:01:44.764 10:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 01:01:45.722 10:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 01:01:45.722 10:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 01:01:45.722 10:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:01:45.722 10:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:01:45.995 10:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 01:01:45.995 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 01:01:45.995 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 01:01:45.995 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:01:45.995 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:01:45.995 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:01:45.995 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:01:45.995 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 01:01:45.995 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 01:01:45.995 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:01:45.995 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:01:45.995 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:01:45.995 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:01:45.995 10:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 01:01:47.373 10:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 01:01:47.373 10:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]]
01:01:47.373 10:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]]
01:01:47.373 10:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 105917
01:01:49.906
01:01:49.906 job0: (groupid=0, jobs=1): err= 0: pid=105943: Mon Dec 9 10:12:56 2024
01:01:49.906 read: IOPS=14.1k, BW=55.1MiB/s (57.8MB/s)(331MiB/6004msec)
01:01:49.906 slat (nsec): min=1968, max=4848.7k, avg=39097.89, stdev=173216.74
01:01:49.906 clat (usec): min=576, max=17081, avg=6147.56, stdev=1243.60
01:01:49.906 lat (usec): min=608, max=17102, avg=6186.66, stdev=1250.04
01:01:49.906 clat percentiles (usec):
01:01:49.906 | 1.00th=[ 3785], 5.00th=[ 4621], 10.00th=[ 5014], 20.00th=[ 5342],
01:01:49.906 | 30.00th=[ 5604], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 6128],
01:01:49.906 | 70.00th=[ 6390], 80.00th=[ 6718], 90.00th=[ 7504], 95.00th=[ 8455],
01:01:49.906 | 99.00th=[10421], 99.50th=[11994], 99.90th=[15270], 99.95th=[15401],
01:01:49.906 | 99.99th=[16909]
01:01:49.906 bw ( KiB/s): min=14336, max=36848, per=50.67%, avg=28590.55, stdev=8103.03, samples=11
01:01:49.906 iops : min= 3584, max= 9212, avg=7147.64, stdev=2025.76, samples=11
01:01:49.906 write: IOPS=8392, BW=32.8MiB/s (34.4MB/s)(170MiB/5188msec); 0 zone resets
01:01:49.906 slat (usec): min=6, max=1800, avg=50.85, stdev=96.30
01:01:49.906 clat (usec): min=278, max=16093, avg=5521.95, stdev=1133.24
01:01:49.906 lat (usec): min=398, max=16116, avg=5572.80, stdev=1135.77
01:01:49.906 clat percentiles (usec):
01:01:49.906 | 1.00th=[ 3228], 5.00th=[ 4113], 10.00th=[ 4490], 20.00th=[ 4883],
01:01:49.906 | 30.00th=[ 5145], 40.00th=[ 5276], 50.00th=[ 5407], 60.00th=[ 5604],
01:01:49.906 | 70.00th=[ 5735], 80.00th=[ 5932], 90.00th=[ 6390], 95.00th=[ 7177],
01:01:49.906 | 99.00th=[ 9896], 99.50th=[11600], 99.90th=[14877], 99.95th=[15664],
01:01:49.906 | 99.99th=[16057]
01:01:49.906 bw ( KiB/s): min=14984, max=37712, per=85.32%, avg=28641.45, stdev=7706.69, samples=11
01:01:49.906 iops : min= 3746, max= 9428, avg=7160.36, stdev=1926.67, samples=11
01:01:49.906 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01%
01:01:49.906 lat (msec) : 2=0.06%, 4=2.35%, 10=96.38%, 20=1.19%
01:01:49.906 cpu : usr=5.21%, sys=29.23%, ctx=11396, majf=0, minf=108
01:01:49.906 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7%
01:01:49.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
01:01:49.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
01:01:49.906 issued rwts: total=84693,43541,0,0 short=0,0,0,0 dropped=0,0,0,0
01:01:49.906 latency : target=0, window=0, percentile=100.00%, depth=128
01:01:49.906
01:01:49.906 Run status group 0 (all jobs):
01:01:49.906 READ: bw=55.1MiB/s (57.8MB/s), 55.1MiB/s-55.1MiB/s (57.8MB/s-57.8MB/s), io=331MiB (347MB), run=6004-6004msec
01:01:49.906 WRITE: bw=32.8MiB/s (34.4MB/s), 32.8MiB/s-32.8MiB/s (34.4MB/s-34.4MB/s), io=170MiB (178MB), run=5188-5188msec
01:01:49.906
01:01:49.906 Disk stats (read/write):
01:01:49.906 nvme0n1: ios=83640/42701, merge=0/0, ticks=465359/216053, in_queue=681412, util=98.51%
01:01:49.906 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized
01:01:49.906 10:12:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 01:01:49.906 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 01:01:49.906 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 01:01:49.906 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:01:49.906 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:01:49.906 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:01:49.906 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 01:01:49.906 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 01:01:49.906 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 01:01:49.906 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:01:49.906 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:01:49.906 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:01:49.906 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 01:01:49.906 10:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 01:01:51.286 10:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 01:01:51.286 10:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 01:01:51.286 10:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 01:01:51.286 10:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 01:01:51.286 10:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=106076 01:01:51.286 10:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 01:01:51.286 10:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 01:01:51.286 [global] 01:01:51.286 thread=1 01:01:51.286 invalidate=1 01:01:51.286 rw=randrw 01:01:51.286 time_based=1 01:01:51.286 runtime=6 01:01:51.286 ioengine=libaio 01:01:51.286 direct=1 01:01:51.286 bs=4096 01:01:51.286 iodepth=128 01:01:51.286 norandommap=0 01:01:51.286 numjobs=1 01:01:51.286 01:01:51.286 verify_dump=1 01:01:51.286 verify_backlog=512 01:01:51.286 verify_state_save=0 01:01:51.286 do_verify=1 01:01:51.286 verify=crc32c-intel 01:01:51.286 [job0] 01:01:51.286 filename=/dev/nvme0n1 01:01:51.286 Could not set queue depth (nvme0n1) 01:01:51.286 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:01:51.286 fio-3.35 01:01:51.286 Starting 1 thread 01:01:52.224 10:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 01:01:52.224 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 01:01:52.483 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 01:01:52.483 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 01:01:52.483 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:01:52.483 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:01:52.483 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 01:01:52.483 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:01:52.483 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 01:01:52.483 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 01:01:52.483 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:01:52.483 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:01:52.483 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:01:52.483 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:01:52.483 10:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 01:01:53.421 10:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 01:01:53.421 10:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:01:53.421 10:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:01:53.421 10:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:01:53.683 10:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 01:01:53.942 10:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 01:01:53.942 10:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 01:01:53.942 10:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:01:53.942 10:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:01:53.942 10:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 01:01:53.942 10:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:01:53.942 10:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 01:01:53.942 10:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 01:01:53.942 10:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:01:53.942 10:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:01:53.942 10:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:01:53.942 10:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:01:53.942 10:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 01:01:54.879 10:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 01:01:54.879 10:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:01:54.879 10:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:01:54.879 10:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 106076 01:01:57.416 01:01:57.416 job0: (groupid=0, jobs=1): err= 0: pid=106097: Mon Dec 9 10:13:04 2024 01:01:57.416 read: IOPS=13.4k, BW=52.4MiB/s (54.9MB/s)(315MiB/6004msec) 01:01:57.416 slat (nsec): min=1830, max=4370.9k, avg=36299.91, stdev=160176.23 01:01:57.416 clat (usec): min=248, max=23587, avg=6421.02, stdev=2911.31 01:01:57.416 lat (usec): min=267, max=23596, avg=6457.32, stdev=2913.59 01:01:57.416 clat percentiles (usec): 01:01:57.416 | 1.00th=[ 635], 5.00th=[ 2245], 10.00th=[ 4621], 20.00th=[ 5276], 01:01:57.416 | 30.00th=[ 5538], 40.00th=[ 5735], 50.00th=[ 5932], 60.00th=[ 6194], 01:01:57.416 | 70.00th=[ 6456], 80.00th=[ 6915], 90.00th=[ 8291], 95.00th=[13698], 01:01:57.416 | 99.00th=[17957], 99.50th=[18744], 99.90th=[20317], 99.95th=[20841], 01:01:57.416 | 99.99th=[22152] 01:01:57.416 bw ( KiB/s): min=14312, max=37976, per=52.75%, avg=28294.55, stdev=7855.60, samples=11 01:01:57.416 iops : min= 3578, max= 9494, avg=7073.64, stdev=1963.90, samples=11 01:01:57.416 write: IOPS=8037, BW=31.4MiB/s (32.9MB/s)(161MiB/5127msec); 0 zone resets 01:01:57.416 slat (usec): min=12, max=2179, avg=49.84, stdev=85.00 01:01:57.416 clat (usec): min=180, max=20852, avg=5913.56, stdev=3311.84 01:01:57.416 lat (usec): min=212, max=20885, avg=5963.40, stdev=3312.34 01:01:57.416 clat percentiles (usec): 01:01:57.416 | 1.00th=[ 429], 5.00th=[ 799], 10.00th=[ 3556], 20.00th=[ 4555], 01:01:57.416 | 30.00th=[ 5014], 40.00th=[ 5211], 50.00th=[ 5407], 60.00th=[ 5604], 01:01:57.416 | 70.00th=[ 5800], 80.00th=[ 6128], 90.00th=[ 8848], 95.00th=[15139], 01:01:57.416 | 99.00th=[16712], 99.50th=[17171], 99.90th=[19268], 99.95th=[19792], 01:01:57.416 | 99.99th=[20579] 01:01:57.416 bw ( KiB/s): 
min=15104, max=37144, per=88.04%, avg=28304.73, stdev=7341.96, samples=11 01:01:57.416 iops : min= 3776, max= 9286, avg=7076.18, stdev=1835.49, samples=11 01:01:57.416 lat (usec) : 250=0.02%, 500=0.86%, 750=1.70%, 1000=1.49% 01:01:57.416 lat (msec) : 2=1.86%, 4=2.97%, 10=83.49%, 20=7.52%, 50=0.09% 01:01:57.416 cpu : usr=5.85%, sys=29.95%, ctx=14226, majf=0, minf=127 01:01:57.416 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 01:01:57.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:01:57.416 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:01:57.416 issued rwts: total=80516,41208,0,0 short=0,0,0,0 dropped=0,0,0,0 01:01:57.416 latency : target=0, window=0, percentile=100.00%, depth=128 01:01:57.416 01:01:57.416 Run status group 0 (all jobs): 01:01:57.416 READ: bw=52.4MiB/s (54.9MB/s), 52.4MiB/s-52.4MiB/s (54.9MB/s-54.9MB/s), io=315MiB (330MB), run=6004-6004msec 01:01:57.416 WRITE: bw=31.4MiB/s (32.9MB/s), 31.4MiB/s-31.4MiB/s (32.9MB/s-32.9MB/s), io=161MiB (169MB), run=5127-5127msec 01:01:57.416 01:01:57.416 Disk stats (read/write): 01:01:57.416 nvme0n1: ios=79836/40126, merge=0/0, ticks=459455/215575, in_queue=675030, util=98.72% 01:01:57.416 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:01:57.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 01:01:57.416 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:01:57.416 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 01:01:57.416 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 01:01:57.416 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 01:01:57.416 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 01:01:57.416 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 01:01:57.416 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 01:01:57.416 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:01:57.676 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 01:01:57.676 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 01:01:57.676 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 01:01:57.676 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 01:01:57.676 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 01:01:57.676 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 01:01:57.935 10:13:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:01:57.935 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 01:01:57.935 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 01:01:57.936 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:01:57.936 rmmod nvme_tcp 01:01:57.936 rmmod nvme_fabrics 01:01:57.936 rmmod nvme_keyring 01:01:57.936 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:01:57.936 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 01:01:57.936 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 01:01:57.936 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 105782 ']' 01:01:57.936 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 105782 01:01:57.936 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 105782 ']' 01:01:57.936 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 105782 01:01:57.936 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 01:01:57.936 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:01:57.936 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105782 01:01:57.936 killing process with pid 105782 01:01:57.936 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:01:57.936 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:01:57.936 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105782' 01:01:57.936 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 105782 01:01:57.936 10:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 105782 01:01:58.198 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:01:58.198 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:01:58.198 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:01:58.198 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 01:01:58.198 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:01:58.198 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 01:01:58.198 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@791 -- # iptables-restore 01:01:58.198 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:01:58.198 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:01:58.198 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:01:58.198 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:01:58.199 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:01:58.199 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:01:58.199 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:01:58.460 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:01:58.460 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:01:58.460 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:01:58.460 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:01:58.460 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:01:58.460 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:01:58.460 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:01:58.460 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:01:58.460 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 01:01:58.460 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:01:58.460 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:01:58.460 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:01:58.460 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 01:01:58.460 01:01:58.460 real 0m20.671s 01:01:58.460 user 1m10.702s 01:01:58.460 sys 0m8.554s 01:01:58.460 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:58.460 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 01:01:58.460 ************************************ 01:01:58.460 END TEST nvmf_target_multipath 01:01:58.460 ************************************ 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:01:58.721 ************************************ 01:01:58.721 START TEST nvmf_zcopy 01:01:58.721 ************************************ 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 01:01:58.721 * Looking for test storage... 01:01:58.721 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:01:58.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:58.721 --rc genhtml_branch_coverage=1 01:01:58.721 --rc genhtml_function_coverage=1 01:01:58.721 --rc genhtml_legend=1 01:01:58.721 --rc geninfo_all_blocks=1 01:01:58.721 --rc geninfo_unexecuted_blocks=1 01:01:58.721 01:01:58.721 ' 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:01:58.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:58.721 --rc genhtml_branch_coverage=1 01:01:58.721 --rc genhtml_function_coverage=1 01:01:58.721 --rc genhtml_legend=1 01:01:58.721 --rc geninfo_all_blocks=1 01:01:58.721 --rc geninfo_unexecuted_blocks=1 01:01:58.721 01:01:58.721 ' 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:01:58.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:58.721 --rc genhtml_branch_coverage=1 01:01:58.721 --rc genhtml_function_coverage=1 01:01:58.721 --rc genhtml_legend=1 01:01:58.721 --rc geninfo_all_blocks=1 01:01:58.721 --rc geninfo_unexecuted_blocks=1 01:01:58.721 01:01:58.721 ' 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:01:58.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:58.721 --rc genhtml_branch_coverage=1 01:01:58.721 --rc genhtml_function_coverage=1 01:01:58.721 --rc genhtml_legend=1 01:01:58.721 --rc geninfo_all_blocks=1 01:01:58.721 --rc geninfo_unexecuted_blocks=1 01:01:58.721 01:01:58.721 ' 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:01:58.721 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:01:58.982 10:13:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:01:58.982 10:13:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:01:58.982 Cannot find device "nvmf_init_br" 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # true 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:01:58.982 Cannot find device "nvmf_init_br2" 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # true 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:01:58.982 Cannot find device "nvmf_tgt_br" 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # true 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:01:58.982 Cannot find device "nvmf_tgt_br2" 01:01:58.982 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # true 01:01:58.983 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:01:58.983 Cannot find device "nvmf_init_br" 01:01:58.983 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # true 01:01:58.983 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:01:58.983 Cannot find device "nvmf_init_br2" 01:01:58.983 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # true 01:01:58.983 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:01:58.983 Cannot find device "nvmf_tgt_br" 01:01:58.983 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # true 01:01:58.983 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:01:58.983 Cannot find device "nvmf_tgt_br2" 01:01:58.983 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # true 01:01:58.983 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:01:58.983 Cannot find device 
"nvmf_br" 01:01:58.983 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # true 01:01:58.983 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:01:58.983 Cannot find device "nvmf_init_if" 01:01:58.983 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # true 01:01:58.983 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:01:58.983 Cannot find device "nvmf_init_if2" 01:01:58.983 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # true 01:01:58.983 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:01:58.983 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:01:58.983 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # true 01:01:58.983 10:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:01:58.983 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:01:58.983 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # true 01:01:58.983 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:01:58.983 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:01:58.983 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:01:59.243 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:01:59.243 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 01:01:59.243 01:01:59.243 --- 10.0.0.3 ping statistics --- 01:01:59.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:59.243 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:01:59.243 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
01:01:59.243 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 01:01:59.243 01:01:59.243 --- 10.0.0.4 ping statistics --- 01:01:59.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:59.243 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:01:59.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:01:59.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 01:01:59.243 01:01:59.243 --- 10.0.0.1 ping statistics --- 01:01:59.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:59.243 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:01:59.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:01:59.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 01:01:59.243 01:01:59.243 --- 10.0.0.2 ping statistics --- 01:01:59.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:59.243 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:01:59.243 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:01:59.503 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 01:01:59.503 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:01:59.503 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 01:01:59.503 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:01:59.503 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=106436 01:01:59.503 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 01:01:59.503 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 106436 01:01:59.503 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 106436 ']' 01:01:59.503 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 01:01:59.503 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 01:01:59.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:01:59.503 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:01:59.503 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 01:01:59.503 10:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:01:59.503 [2024-12-09 10:13:06.366225] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:01:59.503 [2024-12-09 10:13:06.367099] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 01:01:59.503 [2024-12-09 10:13:06.367153] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:01:59.503 [2024-12-09 10:13:06.515559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:59.763 [2024-12-09 10:13:06.569445] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:01:59.763 [2024-12-09 10:13:06.569497] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:01:59.763 [2024-12-09 10:13:06.569503] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:01:59.763 [2024-12-09 10:13:06.569508] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:01:59.763 [2024-12-09 10:13:06.569514] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:01:59.763 [2024-12-09 10:13:06.569775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:01:59.763 [2024-12-09 10:13:06.643415] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:01:59.763 [2024-12-09 10:13:06.643700] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
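For reference, the rpc_cmd traces that follow reduce to a short sequence of plain rpc.py calls against the freshly started target. A minimal standalone sketch, using the same repo path and the 10.0.0.3:4420 listener as this run — the rpc_cmd wrapper seen in the traces only adds xtrace bookkeeping around these commands, and the $rpc shorthand below is introduced here for brevity:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # transport options exactly as traced: '-t tcp -o' comes from the suite's
    # NVMF_TRANSPORT_OPTS, '-c 0 --zcopy' is the zcopy test's addition
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    # allow-any-host subsystem, serial SPDK00000000000001, at most 10 namespaces
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    # data and discovery listeners on the in-namespace target address
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    # 32 MiB malloc bdev with a 4096-byte block size, exported as namespace 1
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1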
01:02:00.333 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:02:00.333 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 01:02:00.333 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:02:00.333 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 01:02:00.333 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:02:00.333 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:02:00.333 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 01:02:00.333 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 01:02:00.333 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:00.334 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:02:00.334 [2024-12-09 10:13:07.326651] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:02:00.334 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:00.334 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 01:02:00.334 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:00.334 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:02:00.334 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:00.334 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:02:00.334 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:00.334 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:02:00.334 [2024-12-09 10:13:07.347958] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:02:00.334 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:00.334 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:02:00.334 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:00.334 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:02:00.334 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:00.334 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 01:02:00.334 10:13:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:00.334 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:02:00.607 malloc0 01:02:00.607 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:00.607 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:02:00.607 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:00.607 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:02:00.607 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:00.607 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 01:02:00.607 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 01:02:00.607 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 01:02:00.607 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 01:02:00.607 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:02:00.607 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:02:00.607 { 01:02:00.607 "params": { 01:02:00.607 "name": "Nvme$subsystem", 01:02:00.607 "trtype": "$TEST_TRANSPORT", 01:02:00.607 "traddr": "$NVMF_FIRST_TARGET_IP", 01:02:00.607 "adrfam": "ipv4", 01:02:00.607 "trsvcid": "$NVMF_PORT", 01:02:00.607 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:02:00.607 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:02:00.607 "hdgst": ${hdgst:-false}, 01:02:00.607 "ddgst": ${ddgst:-false} 01:02:00.607 }, 01:02:00.607 "method": "bdev_nvme_attach_controller" 01:02:00.607 } 01:02:00.607 EOF 01:02:00.607 )") 01:02:00.607 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 01:02:00.607 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 01:02:00.607 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 01:02:00.607 10:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:02:00.607 "params": { 01:02:00.607 "name": "Nvme1", 01:02:00.607 "trtype": "tcp", 01:02:00.607 "traddr": "10.0.0.3", 01:02:00.607 "adrfam": "ipv4", 01:02:00.607 "trsvcid": "4420", 01:02:00.607 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:02:00.607 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:02:00.607 "hdgst": false, 01:02:00.607 "ddgst": false 01:02:00.607 }, 01:02:00.607 "method": "bdev_nvme_attach_controller" 01:02:00.607 }' 01:02:00.607 [2024-12-09 10:13:07.451410] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
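A note on the plumbing in the bdevperf invocation above: the --json /dev/fd/62 argument is bash process substitution, so bdevperf reads the generated configuration from a pipe rather than a file on disk. gen_nvmf_target_json emits the bdev_nvme_attach_controller block shown in the printf trace (Nvme1 attaching to nqn.2016-06.io.spdk:cnode1 at 10.0.0.3:4420 over TCP); the outer JSON wrapper is not traced here. A minimal sketch of the equivalent direct call, with the flag meanings spelled out from this run's output:

    # 10 s runtime (-t), queue depth 128 (-q), verify workload (-w), 8 KiB I/Os (-o)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192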
01:02:00.607 [2024-12-09 10:13:07.451489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106486 ] 01:02:00.607 [2024-12-09 10:13:07.601629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:00.880 [2024-12-09 10:13:07.674325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:02:00.880 Running I/O for 10 seconds... 01:02:03.195 8297.00 IOPS, 64.82 MiB/s [2024-12-09T10:13:11.178Z] 8232.00 IOPS, 64.31 MiB/s [2024-12-09T10:13:12.118Z] 8225.33 IOPS, 64.26 MiB/s [2024-12-09T10:13:13.056Z] 8306.00 IOPS, 64.89 MiB/s [2024-12-09T10:13:13.995Z] 8313.80 IOPS, 64.95 MiB/s [2024-12-09T10:13:14.935Z] 8312.67 IOPS, 64.94 MiB/s [2024-12-09T10:13:16.314Z] 8303.71 IOPS, 64.87 MiB/s [2024-12-09T10:13:16.881Z] 8290.38 IOPS, 64.77 MiB/s [2024-12-09T10:13:18.258Z] 8297.78 IOPS, 64.83 MiB/s [2024-12-09T10:13:18.258Z] 8297.30 IOPS, 64.82 MiB/s 01:02:11.214 Latency(us) 01:02:11.214 [2024-12-09T10:13:18.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:02:11.214 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 01:02:11.214 Verification LBA range: start 0x0 length 0x1000 01:02:11.214 Nvme1n1 : 10.01 8298.50 64.83 0.00 0.00 15380.56 1366.53 20605.21 01:02:11.214 [2024-12-09T10:13:18.258Z] =================================================================================================================== 01:02:11.214 [2024-12-09T10:13:18.258Z] Total : 8298.50 64.83 0.00 0.00 15380.56 1366.53 20605.21 01:02:11.214 10:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 01:02:11.214 10:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 01:02:11.214 10:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=106614 01:02:11.214 10:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 01:02:11.214 10:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 01:02:11.214 10:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:02:11.214 10:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 01:02:11.214 10:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:02:11.214 { 01:02:11.214 "params": { 01:02:11.214 "name": "Nvme$subsystem", 01:02:11.214 "trtype": "$TEST_TRANSPORT", 01:02:11.214 "traddr": "$NVMF_FIRST_TARGET_IP", 01:02:11.214 "adrfam": "ipv4", 01:02:11.214 "trsvcid": "$NVMF_PORT", 01:02:11.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:02:11.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:02:11.214 "hdgst": ${hdgst:-false}, 01:02:11.214 "ddgst": ${ddgst:-false} 01:02:11.214 }, 01:02:11.214 "method": "bdev_nvme_attach_controller" 01:02:11.214 } 01:02:11.214 EOF 01:02:11.214 )") 01:02:11.214 10:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:02:11.214 10:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 01:02:11.214 [2024-12-09 
10:13:18.162184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:11.214 [2024-12-09 10:13:18.162214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:11.214 10:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 01:02:11.214 2024/12/09 10:13:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:11.214 10:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 01:02:11.214 10:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:02:11.214 "params": { 01:02:11.214 "name": "Nvme1", 01:02:11.214 "trtype": "tcp", 01:02:11.214 "traddr": "10.0.0.3", 01:02:11.214 "adrfam": "ipv4", 01:02:11.214 "trsvcid": "4420", 01:02:11.214 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:02:11.214 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:02:11.214 "hdgst": false, 01:02:11.214 "ddgst": false 01:02:11.214 }, 01:02:11.214 "method": "bdev_nvme_attach_controller" 01:02:11.214 }' 01:02:11.214 [2024-12-09 10:13:18.170379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:11.214 [2024-12-09 10:13:18.170404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:11.214 2024/12/09 10:13:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:11.214 [2024-12-09 10:13:18.178148] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:11.214 [2024-12-09 10:13:18.178171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:11.214 2024/12/09 10:13:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:11.214 [2024-12-09 10:13:18.190148] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:11.214 [2024-12-09 10:13:18.190169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:11.214 [2024-12-09 10:13:18.191790] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
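The --json /dev/fd/62 and /dev/fd/63 arguments traced above are the footprint of bash process substitution: gen_nvmf_target_json (the nvmf/common.sh helper whose heredoc is traced here) resolves $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP, and $NVMF_PORT into the Nvme1 attach parameters printed underneath (tcp / 10.0.0.3 / 4420), and bdevperf reads that JSON from the anonymous pipe. A sketch of the second invocation under that assumption (perfpid=$! mirrors the perfpid=106614 assignment traced at target/zcopy.sh@39):

# Sketch: pipe the generated config straight into bdevperf; <(...) expands
# to the /dev/fd/N path seen in the trace above.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json) \
        -t 5 -q 128 -w randrw -M 50 -o 8192 &
perfpid=$!    # 106614 in the run above

For scale, the throughput column in the 10-second verify run above is just IOPS times the 8 KiB I/O size: 8298.50 × 8192 bytes ≈ 64.83 MiB/s.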
01:02:11.214 [2024-12-09 10:13:18.192235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106614 ]
01:02:11.214 [2024-12-09 10:13:18.202140] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
01:02:11.214 [2024-12-09 10:13:18.202157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
01:02:11.214 2024/12/09 10:13:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-line rejection repeats roughly a hundred more times at 12-16 ms intervals (timestamps 2024-12-09 10:13:18.214 through 10:13:19.638) while the second bdevperf instance starts and runs; only the unique lines interleaved with that stream are kept below ...]
01:02:11.475 [2024-12-09 10:13:18.341046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
01:02:11.475 [2024-12-09 10:13:18.418348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
01:02:11.735 Running I/O for 5 seconds...
01:02:12.778 15954.00 IOPS, 124.64 MiB/s [2024-12-09T10:13:19.822Z]
nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:12.778 [2024-12-09 10:13:19.654137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:12.779 [2024-12-09 10:13:19.654209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:12.779 2024/12/09 10:13:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:12.779 [2024-12-09 10:13:19.668128] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:12.779 [2024-12-09 10:13:19.668198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:12.779 2024/12/09 10:13:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:12.779 [2024-12-09 10:13:19.683400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:12.779 [2024-12-09 10:13:19.683483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:12.779 2024/12/09 10:13:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:12.779 [2024-12-09 10:13:19.698774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:12.779 [2024-12-09 10:13:19.698845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:12.779 2024/12/09 10:13:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:12.779 [2024-12-09 10:13:19.713748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:12.779 [2024-12-09 10:13:19.713817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:12.779 2024/12/09 10:13:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:12.779 [2024-12-09 10:13:19.730095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:12.779 [2024-12-09 10:13:19.730165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:12.779 2024/12/09 10:13:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 01:02:12.779 [2024-12-09 10:13:19.743021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:12.779 [2024-12-09 10:13:19.743092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:12.779 2024/12/09 10:13:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:12.779 [2024-12-09 10:13:19.758471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:12.779 [2024-12-09 10:13:19.758551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:12.779 2024/12/09 10:13:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:12.779 [2024-12-09 10:13:19.774971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:12.779 [2024-12-09 10:13:19.775001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:12.779 2024/12/09 10:13:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:12.779 [2024-12-09 10:13:19.790606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:12.779 [2024-12-09 10:13:19.790637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:12.779 2024/12/09 10:13:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:12.779 [2024-12-09 10:13:19.806162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:12.779 [2024-12-09 10:13:19.806235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:12.779 2024/12/09 10:13:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:12.779 [2024-12-09 10:13:19.819632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:12.779 [2024-12-09 10:13:19.819677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.039 2024/12/09 10:13:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.039 [2024-12-09 10:13:19.834993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 01:02:13.039 [2024-12-09 10:13:19.835022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.039 2024/12/09 10:13:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.039 [2024-12-09 10:13:19.850883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.039 [2024-12-09 10:13:19.850957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.039 2024/12/09 10:13:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.039 [2024-12-09 10:13:19.867033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.039 [2024-12-09 10:13:19.867105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.039 2024/12/09 10:13:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.039 [2024-12-09 10:13:19.883171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.039 [2024-12-09 10:13:19.883246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.039 2024/12/09 10:13:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.039 [2024-12-09 10:13:19.899637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.039 [2024-12-09 10:13:19.899725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.039 2024/12/09 10:13:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.039 [2024-12-09 10:13:19.915998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.039 [2024-12-09 10:13:19.916031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.039 2024/12/09 10:13:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.039 [2024-12-09 10:13:19.932294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.039 [2024-12-09 10:13:19.932373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 01:02:13.039 2024/12/09 10:13:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.039 [2024-12-09 10:13:19.947815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.039 [2024-12-09 10:13:19.947888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.039 2024/12/09 10:13:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.039 [2024-12-09 10:13:19.964404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.039 [2024-12-09 10:13:19.964484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.039 2024/12/09 10:13:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.039 [2024-12-09 10:13:19.980394] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.039 [2024-12-09 10:13:19.980468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.039 2024/12/09 10:13:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.039 [2024-12-09 10:13:19.994191] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.039 [2024-12-09 10:13:19.994266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.039 2024/12/09 10:13:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.039 [2024-12-09 10:13:20.008055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.039 [2024-12-09 10:13:20.008086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.039 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.039 [2024-12-09 10:13:20.024393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.039 [2024-12-09 10:13:20.024426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.039 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.040 [2024-12-09 10:13:20.046059] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.040 [2024-12-09 10:13:20.046154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.040 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.040 [2024-12-09 10:13:20.059761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.040 [2024-12-09 10:13:20.059833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.040 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.040 [2024-12-09 10:13:20.075430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.040 [2024-12-09 10:13:20.075524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.040 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.300 [2024-12-09 10:13:20.091099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.300 [2024-12-09 10:13:20.091169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.300 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.300 [2024-12-09 10:13:20.106059] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.300 [2024-12-09 10:13:20.106091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.300 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.300 [2024-12-09 10:13:20.120061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.300 [2024-12-09 10:13:20.120091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.300 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.300 [2024-12-09 10:13:20.135292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.300 [2024-12-09 10:13:20.135367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.300 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.300 [2024-12-09 10:13:20.150426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.300 [2024-12-09 10:13:20.150507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.300 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.300 [2024-12-09 10:13:20.165799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.300 [2024-12-09 10:13:20.165868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.300 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.300 [2024-12-09 10:13:20.181653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.300 [2024-12-09 10:13:20.181721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.300 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.300 [2024-12-09 10:13:20.196523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.300 [2024-12-09 10:13:20.196594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.300 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.300 [2024-12-09 10:13:20.210878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.300 [2024-12-09 10:13:20.210948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.300 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 01:02:13.300 [2024-12-09 10:13:20.226526] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.300 [2024-12-09 10:13:20.226591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.300 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.300 [2024-12-09 10:13:20.242070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.300 [2024-12-09 10:13:20.242144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.300 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.300 [2024-12-09 10:13:20.253571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.300 [2024-12-09 10:13:20.253599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.300 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.300 [2024-12-09 10:13:20.268196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.300 [2024-12-09 10:13:20.268265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.300 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.300 [2024-12-09 10:13:20.283524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.300 [2024-12-09 10:13:20.283589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.300 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.300 [2024-12-09 10:13:20.298459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.300 [2024-12-09 10:13:20.298539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.300 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.300 [2024-12-09 10:13:20.314174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 01:02:13.301 [2024-12-09 10:13:20.314243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.301 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.301 [2024-12-09 10:13:20.326698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.301 [2024-12-09 10:13:20.326765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.301 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.301 [2024-12-09 10:13:20.342249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.301 [2024-12-09 10:13:20.342317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.301 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.561 [2024-12-09 10:13:20.354896] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.561 [2024-12-09 10:13:20.354963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.561 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.561 [2024-12-09 10:13:20.370668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.561 [2024-12-09 10:13:20.370697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.561 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.561 [2024-12-09 10:13:20.386240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.561 [2024-12-09 10:13:20.386309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.561 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.561 [2024-12-09 10:13:20.398818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.561 [2024-12-09 10:13:20.398884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 01:02:13.561 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.561 [2024-12-09 10:13:20.414133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.561 [2024-12-09 10:13:20.414201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.561 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.561 [2024-12-09 10:13:20.427951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.561 [2024-12-09 10:13:20.428016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.561 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.561 [2024-12-09 10:13:20.442746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.561 [2024-12-09 10:13:20.442814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.561 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.561 [2024-12-09 10:13:20.458071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.561 [2024-12-09 10:13:20.458139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.561 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.561 [2024-12-09 10:13:20.472176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.561 [2024-12-09 10:13:20.472248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.561 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.561 [2024-12-09 10:13:20.487516] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.561 [2024-12-09 10:13:20.487586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.561 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.561 [2024-12-09 10:13:20.502977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.561 [2024-12-09 10:13:20.503047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.561 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.561 [2024-12-09 10:13:20.518661] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.561 [2024-12-09 10:13:20.518732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.561 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.561 [2024-12-09 10:13:20.534790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.561 [2024-12-09 10:13:20.534861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.561 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.561 [2024-12-09 10:13:20.550273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.561 [2024-12-09 10:13:20.550344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.561 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.561 [2024-12-09 10:13:20.562373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.561 [2024-12-09 10:13:20.562446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.561 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.561 [2024-12-09 10:13:20.578585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.561 [2024-12-09 10:13:20.578659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.561 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.561 [2024-12-09 10:13:20.594312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.561 [2024-12-09 10:13:20.594383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.561 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.821 [2024-12-09 10:13:20.606551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.821 [2024-12-09 10:13:20.606621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.821 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.821 [2024-12-09 10:13:20.622302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.821 [2024-12-09 10:13:20.622377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.821 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.821 15965.50 IOPS, 124.73 MiB/s [2024-12-09T10:13:20.865Z] [2024-12-09 10:13:20.633782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.821 [2024-12-09 10:13:20.633855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.821 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.821 [2024-12-09 10:13:20.648407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.821 [2024-12-09 10:13:20.648500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.821 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.821 [2024-12-09 10:13:20.663891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.822 [2024-12-09 10:13:20.663967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.822 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.822 [2024-12-09 10:13:20.679412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.822 [2024-12-09 10:13:20.679499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.822 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.822 [2024-12-09 10:13:20.694905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.822 [2024-12-09 10:13:20.694973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.822 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.822 [2024-12-09 10:13:20.710224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.822 [2024-12-09 10:13:20.710295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.822 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.822 [2024-12-09 10:13:20.721653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.822 [2024-12-09 10:13:20.721683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.822 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.822 [2024-12-09 10:13:20.736091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.822 [2024-12-09 10:13:20.736121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.822 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.822 [2024-12-09 10:13:20.751274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.822 [2024-12-09 10:13:20.751345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.822 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.822 [2024-12-09 10:13:20.766440] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.822 [2024-12-09 10:13:20.766527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.822 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.822 [2024-12-09 10:13:20.782509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.822 [2024-12-09 10:13:20.782580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.822 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.822 [2024-12-09 10:13:20.798337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.822 [2024-12-09 10:13:20.798406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.822 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.822 [2024-12-09 10:13:20.810070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.822 [2024-12-09 10:13:20.810140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.822 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.822 [2024-12-09 10:13:20.824555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.822 [2024-12-09 10:13:20.824626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.822 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.822 [2024-12-09 10:13:20.839848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.822 [2024-12-09 10:13:20.839917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.822 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:13.822 [2024-12-09 10:13:20.854946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:13.822 [2024-12-09 
10:13:20.855017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:13.822 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.081 [2024-12-09 10:13:20.870151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.081 [2024-12-09 10:13:20.870219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.081 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.081 [2024-12-09 10:13:20.881329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.081 [2024-12-09 10:13:20.881401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.082 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.082 [2024-12-09 10:13:20.895834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.082 [2024-12-09 10:13:20.895904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.082 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.082 [2024-12-09 10:13:20.911177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.082 [2024-12-09 10:13:20.911245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.082 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.082 [2024-12-09 10:13:20.926231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.082 [2024-12-09 10:13:20.926301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.082 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.082 [2024-12-09 10:13:20.937914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.082 [2024-12-09 10:13:20.937984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.082 2024/12/09 10:13:20 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.082 [2024-12-09 10:13:20.952335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.082 [2024-12-09 10:13:20.952405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.082 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.082 [2024-12-09 10:13:20.967482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.082 [2024-12-09 10:13:20.967512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.082 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.082 [2024-12-09 10:13:20.982667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.082 [2024-12-09 10:13:20.982695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.082 2024/12/09 10:13:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.082 [2024-12-09 10:13:20.998733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.082 [2024-12-09 10:13:20.998801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.082 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.082 [2024-12-09 10:13:21.014449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.082 [2024-12-09 10:13:21.014533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.082 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.082 [2024-12-09 10:13:21.029983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.082 [2024-12-09 10:13:21.030055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.082 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.082 [2024-12-09 10:13:21.044061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.082 [2024-12-09 10:13:21.044136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.082 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.082 [2024-12-09 10:13:21.059780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.082 [2024-12-09 10:13:21.059852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.082 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.082 [2024-12-09 10:13:21.075521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.082 [2024-12-09 10:13:21.075590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.082 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.082 [2024-12-09 10:13:21.090926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.082 [2024-12-09 10:13:21.090994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.082 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.082 [2024-12-09 10:13:21.106803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.082 [2024-12-09 10:13:21.106834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.082 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.082 [2024-12-09 10:13:21.122058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.082 [2024-12-09 10:13:21.122133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.082 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.341 [2024-12-09 10:13:21.132903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.341 [2024-12-09 10:13:21.132974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.341 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.341 [2024-12-09 10:13:21.147919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.341 [2024-12-09 10:13:21.147989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.342 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.342 [2024-12-09 10:13:21.162997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.342 [2024-12-09 10:13:21.163067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.342 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.342 [2024-12-09 10:13:21.178725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.342 [2024-12-09 10:13:21.178796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.342 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.342 [2024-12-09 10:13:21.193989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.342 [2024-12-09 10:13:21.194063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.342 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.342 [2024-12-09 10:13:21.208311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.342 [2024-12-09 10:13:21.208386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.342 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.342 [2024-12-09 10:13:21.223292] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.342 [2024-12-09 10:13:21.223371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.342 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.342 [2024-12-09 10:13:21.239047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.342 [2024-12-09 10:13:21.239144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.342 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.342 [2024-12-09 10:13:21.254632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.342 [2024-12-09 10:13:21.254705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.342 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.342 [2024-12-09 10:13:21.270443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.342 [2024-12-09 10:13:21.270529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.342 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.342 [2024-12-09 10:13:21.286367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.342 [2024-12-09 10:13:21.286452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.342 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.342 [2024-12-09 10:13:21.298965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.342 [2024-12-09 10:13:21.298995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.342 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.342 [2024-12-09 10:13:21.314342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.342 [2024-12-09 
10:13:21.314415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.342 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.342 [2024-12-09 10:13:21.325926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.342 [2024-12-09 10:13:21.326000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.342 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.342 [2024-12-09 10:13:21.339763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.342 [2024-12-09 10:13:21.339832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.342 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.342 [2024-12-09 10:13:21.355338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.342 [2024-12-09 10:13:21.355415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.342 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.342 [2024-12-09 10:13:21.371159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.342 [2024-12-09 10:13:21.371241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.342 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.602 [2024-12-09 10:13:21.386747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.602 [2024-12-09 10:13:21.386826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.602 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.602 [2024-12-09 10:13:21.401838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.602 [2024-12-09 10:13:21.401911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.602 2024/12/09 10:13:21 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.602 [2024-12-09 10:13:21.416079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.602 [2024-12-09 10:13:21.416158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.602 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.602 [2024-12-09 10:13:21.431670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.602 [2024-12-09 10:13:21.431702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.602 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.602 [2024-12-09 10:13:21.447400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.602 [2024-12-09 10:13:21.447472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.602 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.602 [2024-12-09 10:13:21.462647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.602 [2024-12-09 10:13:21.462678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.602 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.602 [2024-12-09 10:13:21.478744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.602 [2024-12-09 10:13:21.478772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.602 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.602 [2024-12-09 10:13:21.494345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.602 [2024-12-09 10:13:21.494417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.602 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.602 [2024-12-09 10:13:21.506086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.602 [2024-12-09 10:13:21.506116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.602 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.602 [2024-12-09 10:13:21.520308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.602 [2024-12-09 10:13:21.520339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.602 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.602 [2024-12-09 10:13:21.536085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.602 [2024-12-09 10:13:21.536166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.602 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.602 [2024-12-09 10:13:21.551551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.602 [2024-12-09 10:13:21.551635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.602 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.602 [2024-12-09 10:13:21.567291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.602 [2024-12-09 10:13:21.567364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.602 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.602 [2024-12-09 10:13:21.582918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.602 [2024-12-09 10:13:21.582988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.602 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.602 [2024-12-09 10:13:21.598909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.602 [2024-12-09 10:13:21.598978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.602 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.602 [2024-12-09 10:13:21.614667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.602 [2024-12-09 10:13:21.614703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.602 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.602 15978.67 IOPS, 124.83 MiB/s [2024-12-09T10:13:21.646Z] [2024-12-09 10:13:21.630006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.602 [2024-12-09 10:13:21.630098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.602 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.602 [2024-12-09 10:13:21.642811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.602 [2024-12-09 10:13:21.642889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.862 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.862 [2024-12-09 10:13:21.658168] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.862 [2024-12-09 10:13:21.658242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.862 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:14.862 [2024-12-09 10:13:21.670820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:14.862 [2024-12-09 10:13:21.670891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:14.862 2024/12/09 10:13:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
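For reference, the failure being exercised above is deterministic: nvmf_subsystem_add_ns is an SPDK JSON-RPC method, and once NSID 1 is attached to nqn.2016-06.io.spdk:cnode1, every further add with the same NSID is rejected by spdk_nvmf_subsystem_add_ns_ext and surfaces to the client as JSON-RPC error -32602 (Invalid parameters). Below is a minimal sketch of issuing one such call by hand, assuming a running SPDK target on its default Unix-domain RPC socket (/var/tmp/spdk.sock is an assumption here) with the malloc0 bdev and cnode1 subsystem from the log already configured; the method name and params are taken verbatim from the log entries, while the raw-socket client itself is illustrative, not the test's actual Go client.

import json
import socket

SOCK_PATH = "/var/tmp/spdk.sock"  # assumed default SPDK RPC socket path

def rpc_call(method, params, req_id):
    """Send one JSON-RPC 2.0 request over the SPDK Unix socket, return the parsed reply."""
    request = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(SOCK_PATH)
        sock.sendall(json.dumps(request).encode())
        # naive single read; enough for a sketch, a real client would frame the stream
        return json.loads(sock.recv(65536).decode())

# Parameters copied from the failing call in the log above.
params = {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "namespace": {"bdev_name": "malloc0", "nsid": 1},
}

print(rpc_call("nvmf_subsystem_add_ns", params, 1))  # first add: NSID 1 is free, succeeds
print(rpc_call("nvmf_subsystem_add_ns", params, 2))  # repeat add: NSID 1 already in use,
# returns {"jsonrpc": "2.0", "id": 2, "error": {"code": -32602, "message": "Invalid parameters"}}

The error object returned by the repeated call matches what the Go client reports in the log (Code=-32602 Msg=Invalid parameters). The %!s(bool=false) artifacts in the logged params are not part of the RPC payload; they are the Go client formatting boolean values with a string verb when it prints the parameter map.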
[... the loop continues: the same three-line sequence repeats at roughly 15 ms intervals from 10:13:21.630 through 10:13:22.618 (pipeline timestamps 01:02:14.602 to 01:02:15.643); intervening repetitions condensed here ...]
01:02:15.643 16016.25 IOPS, 125.13 MiB/s [2024-12-09T10:13:22.687Z]
[... the same three-line sequence repeats from 10:13:22.634 through 10:13:22.783 (pipeline timestamps 01:02:15.643 to 01:02:15.904); intervening repetitions condensed here ...]
01:02:15.904 [2024-12-09 10:13:22.799670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
01:02:15.904 [2024-12-09
10:13:22.799797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:15.904 2024/12/09 10:13:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:15.904 [2024-12-09 10:13:22.815075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:15.904 [2024-12-09 10:13:22.815131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:15.904 2024/12/09 10:13:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:15.904 [2024-12-09 10:13:22.830585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:15.904 [2024-12-09 10:13:22.830631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:15.904 2024/12/09 10:13:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:15.904 [2024-12-09 10:13:22.846795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:15.904 [2024-12-09 10:13:22.846919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:15.904 2024/12/09 10:13:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:15.904 [2024-12-09 10:13:22.862398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:15.904 [2024-12-09 10:13:22.862433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:15.904 2024/12/09 10:13:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:15.904 [2024-12-09 10:13:22.878367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:15.904 [2024-12-09 10:13:22.878397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:15.904 2024/12/09 10:13:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:15.904 [2024-12-09 10:13:22.890928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:15.904 [2024-12-09 10:13:22.890960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:15.904 2024/12/09 10:13:22 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:15.904 [2024-12-09 10:13:22.906165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:15.904 [2024-12-09 10:13:22.906195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:15.904 2024/12/09 10:13:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:15.904 [2024-12-09 10:13:22.917893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:15.904 [2024-12-09 10:13:22.917923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:15.904 2024/12/09 10:13:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:15.904 [2024-12-09 10:13:22.932263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:15.904 [2024-12-09 10:13:22.932294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:15.904 2024/12/09 10:13:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.166 [2024-12-09 10:13:22.948120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.166 [2024-12-09 10:13:22.948153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.166 2024/12/09 10:13:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.166 [2024-12-09 10:13:22.963612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.166 [2024-12-09 10:13:22.963646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.166 2024/12/09 10:13:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.166 [2024-12-09 10:13:22.979018] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.166 [2024-12-09 10:13:22.979050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.166 2024/12/09 10:13:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.166 [2024-12-09 10:13:22.994211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.166 [2024-12-09 10:13:22.994236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.166 2024/12/09 10:13:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.166 [2024-12-09 10:13:23.008396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.166 [2024-12-09 10:13:23.008427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.166 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.166 [2024-12-09 10:13:23.023937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.166 [2024-12-09 10:13:23.023966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.166 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.166 [2024-12-09 10:13:23.039241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.166 [2024-12-09 10:13:23.039271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.166 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.166 [2024-12-09 10:13:23.054355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.166 [2024-12-09 10:13:23.054383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.166 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.166 [2024-12-09 10:13:23.067297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.166 [2024-12-09 10:13:23.067327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.166 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.166 [2024-12-09 10:13:23.082424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.166 [2024-12-09 10:13:23.082453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.166 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.166 [2024-12-09 10:13:23.098864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.166 [2024-12-09 10:13:23.098894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.166 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.166 [2024-12-09 10:13:23.114427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.166 [2024-12-09 10:13:23.114458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.166 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.166 [2024-12-09 10:13:23.130135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.166 [2024-12-09 10:13:23.130164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.166 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.166 [2024-12-09 10:13:23.142969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.166 [2024-12-09 10:13:23.142996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.166 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.166 [2024-12-09 10:13:23.157827] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.166 [2024-12-09 10:13:23.157858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.166 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.166 [2024-12-09 10:13:23.172573] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.166 [2024-12-09 10:13:23.172604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.166 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.166 [2024-12-09 10:13:23.187992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.166 [2024-12-09 10:13:23.188024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.166 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.166 [2024-12-09 10:13:23.203010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.166 [2024-12-09 10:13:23.203039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.166 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.427 [2024-12-09 10:13:23.218138] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.427 [2024-12-09 10:13:23.218167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.427 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.427 [2024-12-09 10:13:23.229779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.427 [2024-12-09 10:13:23.229809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.427 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.427 [2024-12-09 10:13:23.244483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.427 [2024-12-09 10:13:23.244513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.427 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.427 [2024-12-09 10:13:23.260132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.427 [2024-12-09 
10:13:23.260163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.427 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.427 [2024-12-09 10:13:23.275573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.427 [2024-12-09 10:13:23.275610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.427 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.427 [2024-12-09 10:13:23.290997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.427 [2024-12-09 10:13:23.291031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.427 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.427 [2024-12-09 10:13:23.306379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.427 [2024-12-09 10:13:23.306412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.427 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.427 [2024-12-09 10:13:23.319290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.427 [2024-12-09 10:13:23.319329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.427 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.427 [2024-12-09 10:13:23.334962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.427 [2024-12-09 10:13:23.334998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.427 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.427 [2024-12-09 10:13:23.350127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.427 [2024-12-09 10:13:23.350159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.427 2024/12/09 10:13:23 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.427 [2024-12-09 10:13:23.361862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.427 [2024-12-09 10:13:23.361896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.427 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.427 [2024-12-09 10:13:23.376545] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.427 [2024-12-09 10:13:23.376590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.427 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.427 [2024-12-09 10:13:23.391697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.427 [2024-12-09 10:13:23.391734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.427 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.427 [2024-12-09 10:13:23.407439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.427 [2024-12-09 10:13:23.407472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.427 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.427 [2024-12-09 10:13:23.422596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.427 [2024-12-09 10:13:23.422629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.427 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.427 [2024-12-09 10:13:23.438201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.427 [2024-12-09 10:13:23.438231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.427 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.427 [2024-12-09 10:13:23.451725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.427 [2024-12-09 10:13:23.451754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.427 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.427 [2024-12-09 10:13:23.466892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.427 [2024-12-09 10:13:23.466923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.688 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.688 [2024-12-09 10:13:23.482509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.688 [2024-12-09 10:13:23.482542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.688 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.688 [2024-12-09 10:13:23.498254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.688 [2024-12-09 10:13:23.498297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.688 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.688 [2024-12-09 10:13:23.510715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.688 [2024-12-09 10:13:23.510746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.688 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.688 [2024-12-09 10:13:23.526539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.688 [2024-12-09 10:13:23.526577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.688 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.688 [2024-12-09 10:13:23.542927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.688 [2024-12-09 10:13:23.542963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.688 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.688 [2024-12-09 10:13:23.558263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.688 [2024-12-09 10:13:23.558297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.688 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.688 [2024-12-09 10:13:23.570407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.688 [2024-12-09 10:13:23.570438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.688 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.688 [2024-12-09 10:13:23.586792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.688 [2024-12-09 10:13:23.586825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.688 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.688 [2024-12-09 10:13:23.602636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.688 [2024-12-09 10:13:23.602669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.688 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.688 [2024-12-09 10:13:23.618747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:02:16.688 [2024-12-09 10:13:23.618781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:16.688 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:16.688 16008.00 IOPS, 125.06 MiB/s 
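The storm of identical failures above is a negative path being exercised on purpose: while I/O runs, the test keeps re-issuing nvmf_subsystem_add_ns for an NSID that is already attached to cnode1, and SPDK rejects every attempt. A minimal sketch of how to provoke the same rejection by hand against a running nvmf target follows; it assumes scripts/rpc.py from an SPDK checkout with its current option names, and the bdev size and serial number below are arbitrary.

#!/usr/bin/env bash
set -euo pipefail
rpc=./scripts/rpc.py            # path inside an SPDK checkout (assumption)
nqn=nqn.2016-06.io.spdk:cnode1

"$rpc" bdev_malloc_create -b malloc0 32 512                  # 32 MiB malloc bdev, 512 B blocks
"$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 # allow any host, arbitrary serial
"$rpc" nvmf_subsystem_add_ns "$nqn" malloc0 -n 1             # first add of NSID 1 succeeds

# A second add with the same explicit NSID is rejected by
# spdk_nvmf_subsystem_add_ns_ext and surfaces to the caller as
# JSON-RPC error -32602 (Invalid parameters), as logged above.
"$rpc" nvmf_subsystem_add_ns "$nqn" malloc0 -n 1 \
    || echo "expected failure: NSID 1 already in use"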
[2024-12-09 10:13:23.631038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
01:02:16.688 [2024-12-09 10:13:23.631074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
01:02:16.688
01:02:16.688 Latency(us)
01:02:16.688 [2024-12-09T10:13:23.732Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
01:02:16.688 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
01:02:16.688 Nvme1n1                     :       5.01   16009.96     125.08       0.00     0.00    7986.69    1974.67   14080.22
01:02:16.688 [2024-12-09T10:13:23.732Z] ===================================================================================================================
01:02:16.688 [2024-12-09T10:13:23.732Z] Total                       :              16009.96     125.08       0.00     0.00    7986.69    1974.67   14080.22
01:02:16.688 2024/12/09 10:13:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-entry error sequence then resumes at 12 ms intervals with identical parameters, from 10:13:23.642 through 10:13:23.918 ...]
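The summary table is internally consistent, which is worth a quick sanity check when reading these runs. With the 8192-byte I/O size from the job line, IOPS and MiB/s must agree, and with the full queue depth of 128 outstanding, Little's law predicts the average latency. Both checks below assume a shell with bc installed; the constants are copied from the table.

# throughput: IOPS x IO size, expressed in MiB/s
echo "scale=2; 16009.96 * 8192 / 1048576" | bc   # prints 125.07, i.e. the reported 125.08 MiB/s
# latency via Little's law: depth / IOPS, in microseconds
echo "scale=2; 128 * 1000000 / 16009.96" | bc    # prints 7995.02, close to the 7986.69 us average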
01:02:16.950 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (106614) - No such process
01:02:16.950 10:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 106614
01:02:16.950 10:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
01:02:16.950 10:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
01:02:16.950 10:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
01:02:16.950 10:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:02:16.950 10:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
01:02:16.950 10:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
01:02:16.950 10:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
01:02:16.950 delay0
01:02:16.950 10:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:02:16.950 10:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
01:02:16.950 10:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
01:02:16.950 10:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
01:02:16.950 10:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:02:16.950 10:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'
01:02:17.210 [2024-12-09 10:13:24.133408] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
01:02:25.371 Initializing NVMe Controllers
01:02:25.371 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
01:02:25.371 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
01:02:25.371 Initialization complete. Launching workers.
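With the add-namespace loop reaped (the PID it tried to kill had already exited, hence "No such process"), the script swaps the plain malloc namespace for a delay bdev and aims the abort example at it, so that aborts find slow in-flight commands to act on. The distilled sequence, taken verbatim from the trace above, is shown below; rpc_cmd is the suite's shell wrapper for issuing SPDK JSON-RPC calls, and the -r/-t/-w/-n arguments to bdev_delay_create are the average and p99 read/write latencies, which I believe are interpreted in microseconds.

rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
/home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'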
01:02:25.371 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 234, failed: 29451
01:02:25.371 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 29562, failed to submit 123
01:02:25.371 success 29485, unsuccessful 77, failed 0
01:02:25.371 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
01:02:25.371 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
01:02:25.371 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
01:02:25.371 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
01:02:25.371 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
01:02:25.371 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
01:02:25.371 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
01:02:25.371 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
01:02:25.371 rmmod nvme_tcp
01:02:25.371 rmmod nvme_fabrics
01:02:25.371 rmmod nvme_keyring
01:02:25.371 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
01:02:25.371 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
01:02:25.371 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
01:02:25.371 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 106436 ']'
01:02:25.371 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 106436
01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 106436 ']'
01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 106436
01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106436
01:02:25.372 killing process with pid 106436
01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106436'
01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 106436
01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 106436
01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
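The killprocess trace above follows a fixed pattern: verify a PID was given and is alive, check that the process is not the sudo wrapper, announce the kill, then kill and reap. A minimal sketch of that pattern is below; the real helper in autotest_common.sh has more branches (for example for non-Linux hosts and sudo-wrapped targets), so treat this as an approximation.

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                        # no pid, nothing to do
    kill -0 "$pid" 2>/dev/null || return 0           # already gone
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")  # as in the trace above
    [ "$process_name" = sudo ] && return 1           # never kill the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                  # reap if it was our child
}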
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 01:02:25.372 01:02:25.372 real 0m26.293s 01:02:25.372 user 0m37.830s 01:02:25.372 sys 0m9.997s 01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 01:02:25.372 ************************************ 01:02:25.372 END TEST nvmf_zcopy 01:02:25.372 ************************************ 01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:02:25.372 ************************************ 01:02:25.372 START TEST nvmf_nmic 01:02:25.372 ************************************ 01:02:25.372 10:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 01:02:25.372 * Looking for test storage... 01:02:25.372 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:02:25.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:25.372 --rc genhtml_branch_coverage=1 01:02:25.372 --rc genhtml_function_coverage=1 01:02:25.372 --rc genhtml_legend=1 01:02:25.372 --rc geninfo_all_blocks=1 01:02:25.372 --rc geninfo_unexecuted_blocks=1 01:02:25.372 01:02:25.372 ' 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:02:25.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:25.372 --rc genhtml_branch_coverage=1 01:02:25.372 --rc genhtml_function_coverage=1 01:02:25.372 --rc genhtml_legend=1 01:02:25.372 --rc geninfo_all_blocks=1 01:02:25.372 --rc geninfo_unexecuted_blocks=1 01:02:25.372 01:02:25.372 ' 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:02:25.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:25.372 --rc genhtml_branch_coverage=1 01:02:25.372 --rc genhtml_function_coverage=1 01:02:25.372 --rc genhtml_legend=1 01:02:25.372 --rc geninfo_all_blocks=1 01:02:25.372 --rc geninfo_unexecuted_blocks=1 01:02:25.372 01:02:25.372 ' 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:02:25.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:25.372 --rc genhtml_branch_coverage=1 01:02:25.372 --rc genhtml_function_coverage=1 01:02:25.372 --rc genhtml_legend=1 01:02:25.372 --rc geninfo_all_blocks=1 01:02:25.372 --rc geninfo_unexecuted_blocks=1 01:02:25.372 01:02:25.372 ' 01:02:25.372 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:02:25.373 10:13:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:02:25.373 Cannot find device "nvmf_init_br" 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # true 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:02:25.373 Cannot find device "nvmf_init_br2" 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # true 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:02:25.373 Cannot find device "nvmf_tgt_br" 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # true 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:02:25.373 Cannot find device "nvmf_tgt_br2" 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # true 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:02:25.373 Cannot find device "nvmf_init_br" 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # true 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:02:25.373 Cannot find device "nvmf_init_br2" 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # true 01:02:25.373 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:02:25.373 Cannot find device "nvmf_tgt_br" 01:02:25.374 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # true 01:02:25.374 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:02:25.374 Cannot find device "nvmf_tgt_br2" 01:02:25.374 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # true 
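The string of "Cannot find device" messages above is expected: nvmf_veth_fini probes for interfaces left over from a previous run, and the trace shows each teardown command immediately followed by "true" (same common.sh line number), so a missing device does not abort the script while "set -e" is active. A minimal sketch of that idempotent-cleanup pattern, assuming the interface names shown in this log (the "quiet" wrapper is illustrative, not a helper from nvmf/common.sh):

    # Tear down test interfaces if present; tolerate "Cannot find device" failures.
    quiet() { "$@" 2>/dev/null || true; }        # illustrative wrapper, not from SPDK
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        quiet ip link set "$dev" nomaster        # detach from any stale bridge
        quiet ip link set "$dev" down
    done
    quiet ip link delete nvmf_br type bridge     # remove the stale bridge itself
    quiet ip link delete nvmf_init_if
    quiet ip link delete nvmf_init_if2

With the slate clean, the script rebuilds the topology traced below: namespace nvmf_tgt_ns_spdk, veth pairs for initiator and target sides, addresses 10.0.0.1 through 10.0.0.4/24, a bridge nvmf_br joining the host-side ends, and iptables ACCEPT rules for port 4420.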
01:02:25.374 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:02:25.374 Cannot find device "nvmf_br" 01:02:25.374 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # true 01:02:25.374 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:02:25.374 Cannot find device "nvmf_init_if" 01:02:25.374 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # true 01:02:25.374 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:02:25.374 Cannot find device "nvmf_init_if2" 01:02:25.374 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # true 01:02:25.374 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:02:25.374 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:02:25.374 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # true 01:02:25.374 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:02:25.374 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:02:25.374 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # true 01:02:25.374 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:02:25.374 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:02:25.374 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:02:25.374 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:02:25.374 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:02:25.374 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:02:25.634 10:13:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:02:25.634 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
01:02:25.634 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.161 ms 01:02:25.634 01:02:25.634 --- 10.0.0.3 ping statistics --- 01:02:25.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:25.634 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:02:25.634 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:02:25.634 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.030 ms 01:02:25.634 01:02:25.634 --- 10.0.0.4 ping statistics --- 01:02:25.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:25.634 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:02:25.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:02:25.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 01:02:25.634 01:02:25.634 --- 10.0.0.1 ping statistics --- 01:02:25.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:25.634 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:02:25.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:02:25.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 01:02:25.634 01:02:25.634 --- 10.0.0.2 ping statistics --- 01:02:25.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:25.634 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=107003 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 107003 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 107003 ']' 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:02:25.634 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 01:02:25.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:02:25.635 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:02:25.635 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 01:02:25.635 10:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:02:25.895 [2024-12-09 10:13:32.718430] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:02:25.895 [2024-12-09 10:13:32.719337] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 01:02:25.895 [2024-12-09 10:13:32.719388] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:02:25.895 [2024-12-09 10:13:32.869687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:02:25.895 [2024-12-09 10:13:32.916885] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:02:25.895 [2024-12-09 10:13:32.916970] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:02:25.895 [2024-12-09 10:13:32.916976] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:02:25.895 [2024-12-09 10:13:32.916981] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:02:25.895 [2024-12-09 10:13:32.916986] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:02:25.895 [2024-12-09 10:13:32.917909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:02:25.895 [2024-12-09 10:13:32.918026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:02:25.895 [2024-12-09 10:13:32.918118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:02:25.895 [2024-12-09 10:13:32.918124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:02:26.154 [2024-12-09 10:13:32.988944] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:02:26.154 [2024-12-09 10:13:32.990410] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 01:02:26.154 [2024-12-09 10:13:32.990517] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
01:02:26.154 [2024-12-09 10:13:32.990794] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 01:02:26.154 [2024-12-09 10:13:32.991026] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 01:02:26.724 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:02:26.724 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 01:02:26.724 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:02:26.724 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 01:02:26.724 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:02:26.724 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:02:26.724 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:02:26.724 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:26.724 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:02:26.724 [2024-12-09 10:13:33.651276] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:02:26.724 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:26.724 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:02:26.724 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:26.724 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:02:26.724 Malloc0 01:02:26.725 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:26.725 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 01:02:26.725 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:26.725 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:02:26.725 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:26.725 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:02:26.725 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:26.725 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:02:26.725 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:26.725 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 
01:02:26.725 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:26.725 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:02:26.725 [2024-12-09 10:13:33.747447] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:02:26.725 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:26.725 test case1: single bdev can't be used in multiple subsystems 01:02:26.725 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 01:02:26.725 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 01:02:26.725 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:26.725 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:02:26.725 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:26.725 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 01:02:26.725 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:26.725 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:02:26.985 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:26.985 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 01:02:26.985 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 01:02:26.985 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:26.985 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:02:26.985 [2024-12-09 10:13:33.782949] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 01:02:26.985 [2024-12-09 10:13:33.782975] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 01:02:26.985 [2024-12-09 10:13:33.782983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:02:26.985 2024/12/09 10:13:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:02:26.985 request: 01:02:26.985 { 01:02:26.985 "method": "nvmf_subsystem_add_ns", 01:02:26.985 "params": { 01:02:26.985 "nqn": "nqn.2016-06.io.spdk:cnode2", 01:02:26.985 "namespace": { 01:02:26.985 "bdev_name": "Malloc0", 01:02:26.985 "no_auto_visible": false, 01:02:26.985 "hide_metadata": false 01:02:26.985 } 01:02:26.985 } 01:02:26.985 } 01:02:26.985 Got JSON-RPC error response 01:02:26.985 GoRPCClient: error on JSON-RPC call 
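The Code=-32602 response above is the point of test case1: Malloc0 is already claimed (type exclusive_write) by cnode1, so adding it to cnode2 must be rejected. The script captures the RPC's exit status and treats success as the failure case, which is what the nmic_status check below implements. A minimal sketch of that expected-failure assertion, assuming the standard SPDK rpc.py client at the repo path shown in this log:

    # Expect this to fail: Malloc0 is already claimed by nqn.2016-06.io.spdk:cnode1.
    nmic_status=0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns \
        nqn.2016-06.io.spdk:cnode2 Malloc0 || nmic_status=$?
    if [ "$nmic_status" -eq 0 ]; then
        echo 'single bdev was added to a second subsystem; test failed' >&2
        exit 1
    fi
    echo ' Adding namespace failed - expected result.'

Note the exclusivity applies to the bdev claim, not to listeners: immediately after this, test case2 adds a second listener on port 4421 to cnode1 and connects the host over both paths.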
01:02:26.985 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:02:26.985 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 01:02:26.985 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 01:02:26.985 Adding namespace failed - expected result. 01:02:26.985 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 01:02:26.985 test case2: host connect to nvmf target in multiple paths 01:02:26.985 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 01:02:26.985 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 01:02:26.985 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:26.985 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:02:26.985 [2024-12-09 10:13:33.799002] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:02:26.985 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:26.985 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 01:02:26.985 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 01:02:26.985 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 01:02:26.985 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 01:02:26.985 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 01:02:26.985 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 01:02:26.985 10:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 01:02:29.523 10:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:02:29.523 10:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 01:02:29.523 10:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:02:29.523 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 01:02:29.523 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 01:02:29.523 10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 01:02:29.523 
10:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 01:02:29.523 [global] 01:02:29.523 thread=1 01:02:29.523 invalidate=1 01:02:29.523 rw=write 01:02:29.523 time_based=1 01:02:29.523 runtime=1 01:02:29.523 ioengine=libaio 01:02:29.523 direct=1 01:02:29.523 bs=4096 01:02:29.523 iodepth=1 01:02:29.523 norandommap=0 01:02:29.523 numjobs=1 01:02:29.523 01:02:29.523 verify_dump=1 01:02:29.523 verify_backlog=512 01:02:29.523 verify_state_save=0 01:02:29.523 do_verify=1 01:02:29.523 verify=crc32c-intel 01:02:29.523 [job0] 01:02:29.523 filename=/dev/nvme0n1 01:02:29.523 Could not set queue depth (nvme0n1) 01:02:29.524 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:02:29.524 fio-3.35 01:02:29.524 Starting 1 thread 01:02:30.463 01:02:30.463 job0: (groupid=0, jobs=1): err= 0: pid=107107: Mon Dec 9 10:13:37 2024 01:02:30.463 read: IOPS=3617, BW=14.1MiB/s (14.8MB/s)(14.1MiB/1001msec) 01:02:30.463 slat (nsec): min=6893, max=24826, avg=7473.67, stdev=935.47 01:02:30.463 clat (usec): min=115, max=436, avg=141.22, stdev= 9.89 01:02:30.463 lat (usec): min=122, max=455, avg=148.69, stdev=10.06 01:02:30.463 clat percentiles (usec): 01:02:30.463 | 1.00th=[ 124], 5.00th=[ 128], 10.00th=[ 131], 20.00th=[ 135], 01:02:30.463 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 143], 01:02:30.464 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 153], 95.00th=[ 157], 01:02:30.464 | 99.00th=[ 165], 99.50th=[ 172], 99.90th=[ 180], 99.95th=[ 190], 01:02:30.464 | 99.99th=[ 437] 01:02:30.464 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 01:02:30.464 slat (usec): min=9, max=123, avg=13.32, stdev= 8.55 01:02:30.464 clat (usec): min=77, max=294, avg=97.67, stdev= 9.12 01:02:30.464 lat (usec): min=88, max=330, avg=110.99, stdev=15.24 01:02:30.464 clat percentiles (usec): 01:02:30.464 | 1.00th=[ 84], 5.00th=[ 87], 10.00th=[ 89], 20.00th=[ 91], 01:02:30.464 | 30.00th=[ 93], 40.00th=[ 95], 50.00th=[ 97], 60.00th=[ 98], 01:02:30.464 | 70.00th=[ 101], 80.00th=[ 104], 90.00th=[ 109], 95.00th=[ 114], 01:02:30.464 | 99.00th=[ 124], 99.50th=[ 133], 99.90th=[ 147], 99.95th=[ 151], 01:02:30.464 | 99.99th=[ 293] 01:02:30.464 bw ( KiB/s): min=16384, max=16384, per=100.00%, avg=16384.00, stdev= 0.00, samples=1 01:02:30.464 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 01:02:30.464 lat (usec) : 100=36.05%, 250=63.92%, 500=0.03% 01:02:30.464 cpu : usr=1.20%, sys=6.10%, ctx=7717, majf=0, minf=5 01:02:30.464 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:02:30.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:30.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:30.464 issued rwts: total=3621,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 01:02:30.464 latency : target=0, window=0, percentile=100.00%, depth=1 01:02:30.464 01:02:30.464 Run status group 0 (all jobs): 01:02:30.464 READ: bw=14.1MiB/s (14.8MB/s), 14.1MiB/s-14.1MiB/s (14.8MB/s-14.8MB/s), io=14.1MiB (14.8MB), run=1001-1001msec 01:02:30.464 WRITE: bw=16.0MiB/s (16.8MB/s), 16.0MiB/s-16.0MiB/s (16.8MB/s-16.8MB/s), io=16.0MiB (16.8MB), run=1001-1001msec 01:02:30.464 01:02:30.464 Disk stats (read/write): 01:02:30.464 nvme0n1: ios=3352/3584, merge=0/0, ticks=495/368, in_queue=863, util=91.28% 01:02:30.464 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:02:30.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 01:02:30.464 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:02:30.464 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 01:02:30.464 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 01:02:30.464 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 01:02:30.724 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 01:02:30.724 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 01:02:30.724 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 01:02:30.724 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 01:02:30.724 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 01:02:30.724 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 01:02:30.724 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 01:02:30.724 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:02:30.724 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 01:02:30.724 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 01:02:30.724 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:02:30.724 rmmod nvme_tcp 01:02:30.724 rmmod nvme_fabrics 01:02:30.724 rmmod nvme_keyring 01:02:30.724 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:02:30.724 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 01:02:30.724 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 01:02:30.724 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 107003 ']' 01:02:30.724 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 107003 01:02:30.724 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 107003 ']' 01:02:30.724 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 107003 01:02:30.724 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 01:02:30.724 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:02:30.724 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107003 01:02:30.724 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:02:30.724 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:02:30.724 killing process with pid 107003 01:02:30.724 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107003' 01:02:30.724 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 107003 01:02:30.724 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 107003 01:02:30.984 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:02:30.984 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:02:30.984 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:02:30.984 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 01:02:30.984 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 01:02:30.984 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:02:30.984 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 01:02:30.984 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:02:30.984 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:02:30.984 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:02:30.984 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:02:30.984 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:02:30.984 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:02:30.984 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:02:30.985 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:02:30.985 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:02:30.985 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:02:30.985 10:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:02:30.985 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:02:31.245 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:02:31.245 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:02:31.245 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:02:31.245 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 01:02:31.245 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:02:31.245 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:02:31.245 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:02:31.245 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 01:02:31.245 01:02:31.245 real 0m6.297s 01:02:31.245 user 0m16.276s 01:02:31.245 sys 0m1.708s 01:02:31.245 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 01:02:31.245 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:02:31.245 ************************************ 01:02:31.245 END TEST nvmf_nmic 01:02:31.245 ************************************ 01:02:31.245 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 01:02:31.245 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:02:31.245 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:02:31.245 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:02:31.245 ************************************ 01:02:31.245 START TEST nvmf_fio_target 01:02:31.245 ************************************ 01:02:31.245 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 01:02:31.505 * Looking for test storage... 
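Before the fio_target setup begins, note that the nvmf_nmic teardown traced above is the generic nvmftestfini recipe: poll lsblk until the SPDK serial disappears, unload the initiator-side NVMe modules, kill the nvmf_tgt PID, strip the tagged iptables rules, and dismantle the veth/bridge topology. A condensed sketch of that sequence, using the interface, namespace, and serial names from this run (the retry bound and sleep interval are illustrative, not the exact autotest helpers):

    # Poll until no block device reports the test serial (waitforserial_disconnect).
    i=0
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
        (( i++ >= 15 )) && exit 1   # illustrative bound; the helper counts attempts similarly
        sleep 1
    done

    # Unload initiator-side modules (the trace shows nvme_tcp, nvme_fabrics,
    # nvme_keyring being removed).
    modprobe -r nvme-tcp
    modprobe -r nvme-fabrics

    # Stop the target; $nvmfpid was recorded at startup (107003 in this run).
    kill "$nvmfpid"
    wait "$nvmfpid"

    # Remove only the firewall rules tagged SPDK_NVMF at init time.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Dismantle the virtual topology: detach and down the bridge ports,
    # then delete the bridge, the host-side veths, and the namespace links.
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" nomaster
        ip link set "$port" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk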
01:02:31.506 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:02:31.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:31.506 --rc genhtml_branch_coverage=1 01:02:31.506 --rc genhtml_function_coverage=1 01:02:31.506 --rc genhtml_legend=1 01:02:31.506 --rc geninfo_all_blocks=1 01:02:31.506 --rc geninfo_unexecuted_blocks=1 01:02:31.506 01:02:31.506 ' 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:02:31.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:31.506 --rc genhtml_branch_coverage=1 01:02:31.506 --rc genhtml_function_coverage=1 01:02:31.506 --rc genhtml_legend=1 01:02:31.506 --rc geninfo_all_blocks=1 01:02:31.506 --rc geninfo_unexecuted_blocks=1 01:02:31.506 01:02:31.506 ' 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:02:31.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:31.506 --rc genhtml_branch_coverage=1 01:02:31.506 --rc genhtml_function_coverage=1 01:02:31.506 --rc genhtml_legend=1 01:02:31.506 --rc geninfo_all_blocks=1 01:02:31.506 --rc geninfo_unexecuted_blocks=1 01:02:31.506 01:02:31.506 ' 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:02:31.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:31.506 --rc genhtml_branch_coverage=1 01:02:31.506 --rc genhtml_function_coverage=1 01:02:31.506 --rc genhtml_legend=1 01:02:31.506 --rc geninfo_all_blocks=1 01:02:31.506 --rc geninfo_unexecuted_blocks=1 01:02:31.506 
01:02:31.506 ' 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 01:02:31.506 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:02:31.507 10:13:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:02:31.507 Cannot find device "nvmf_init_br" 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # true 01:02:31.507 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:02:31.767 Cannot find device "nvmf_init_br2" 01:02:31.767 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # true 01:02:31.767 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:02:31.767 Cannot find device "nvmf_tgt_br" 01:02:31.767 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # true 01:02:31.767 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:02:31.767 Cannot find device "nvmf_tgt_br2" 01:02:31.767 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # true 01:02:31.767 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:02:31.767 Cannot find device "nvmf_init_br" 01:02:31.767 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # true 01:02:31.767 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:02:31.767 Cannot find device "nvmf_init_br2" 01:02:31.767 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # true 01:02:31.767 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:02:31.767 Cannot find device "nvmf_tgt_br" 01:02:31.767 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@168 -- # true 01:02:31.767 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:02:31.767 Cannot find device "nvmf_tgt_br2" 01:02:31.767 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # true 01:02:31.767 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:02:31.767 Cannot find device "nvmf_br" 01:02:31.767 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # true 01:02:31.767 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:02:31.767 Cannot find device "nvmf_init_if" 01:02:31.767 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # true 01:02:31.767 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:02:31.767 Cannot find device "nvmf_init_if2" 01:02:31.767 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # true 01:02:31.767 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:02:31.767 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:02:31.767 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # true 01:02:31.767 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:02:31.767 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:02:31.767 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # true 01:02:31.767 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:02:31.767 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:02:31.767 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:02:31.767 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:02:31.767 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:02:31.767 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:02:31.767 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:02:32.027 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:02:32.027 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:02:32.027 10:13:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:02:32.027 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:02:32.027 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:02:32.027 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:02:32.027 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:02:32.027 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:02:32.027 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:02:32.027 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:02:32.027 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:02:32.027 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:02:32.027 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:02:32.027 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:02:32.027 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:02:32.028 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:02:32.028 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:02:32.028 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:02:32.028 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:02:32.028 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:02:32.028 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:02:32.028 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:02:32.028 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:02:32.028 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:02:32.028 10:13:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:02:32.028 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:02:32.028 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:02:32.028 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.126 ms 01:02:32.028 01:02:32.028 --- 10.0.0.3 ping statistics --- 01:02:32.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:32.028 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 01:02:32.028 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:02:32.028 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:02:32.028 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 01:02:32.028 01:02:32.028 --- 10.0.0.4 ping statistics --- 01:02:32.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:32.028 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 01:02:32.028 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:02:32.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:02:32.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 01:02:32.028 01:02:32.028 --- 10.0.0.1 ping statistics --- 01:02:32.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:32.028 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 01:02:32.028 10:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:02:32.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:02:32.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 01:02:32.028 01:02:32.028 --- 10.0.0.2 ping statistics --- 01:02:32.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:32.028 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 01:02:32.028 10:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:02:32.028 10:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 01:02:32.028 10:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:02:32.028 10:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:02:32.028 10:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:02:32.028 10:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:02:32.028 10:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:02:32.028 10:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:02:32.028 10:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:02:32.028 10:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 01:02:32.028 10:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:02:32.028 10:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 01:02:32.028 10:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:02:32.028 10:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=107345 01:02:32.028 10:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 01:02:32.028 10:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 107345 01:02:32.028 10:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 107345 ']' 01:02:32.028 10:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:02:32.028 10:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 01:02:32.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:02:32.028 10:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:02:32.028 10:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 01:02:32.028 10:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:02:32.288 [2024-12-09 10:13:39.102663] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
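The nvmf_veth_init trace above rebuilds the test topology from scratch: a network namespace for the target, two veth pairs per side, a bridge joining the host-side ends, tagged iptables ACCEPT rules for port 4420, and a ping sweep proving all four addresses answer. A condensed sketch of the same steps, showing one veth pair per side (the *2 pair is set up identically with 10.0.0.2 and 10.0.0.4):

    ip netns add nvmf_tgt_ns_spdk

    # Initiator pair stays on the host; the target interface moves into the namespace.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side veth ends so initiator and target can reach each other.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Open the NVMe/TCP port and allow bridged traffic; the SPDK_NVMF comment tag
    # is what lets nvmftestfini strip exactly these rules later.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

    # Sanity check: every address must answer before the target is started.
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1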
01:02:32.288 [2024-12-09 10:13:39.103522] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 01:02:32.288 [2024-12-09 10:13:39.103565] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:02:32.288 [2024-12-09 10:13:39.239645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:02:32.288 [2024-12-09 10:13:39.292925] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:02:32.288 [2024-12-09 10:13:39.292981] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:02:32.288 [2024-12-09 10:13:39.293004] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:02:32.288 [2024-12-09 10:13:39.293009] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:02:32.288 [2024-12-09 10:13:39.293013] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:02:32.288 [2024-12-09 10:13:39.293865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:02:32.288 [2024-12-09 10:13:39.294014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:02:32.288 [2024-12-09 10:13:39.294119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:02:32.288 [2024-12-09 10:13:39.294120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:02:32.547 [2024-12-09 10:13:39.364697] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:02:32.547 [2024-12-09 10:13:39.365705] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 01:02:32.547 [2024-12-09 10:13:39.366021] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 01:02:32.547 [2024-12-09 10:13:39.366370] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 01:02:32.547 [2024-12-09 10:13:39.366618] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
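nvmfappstart then prefixes NVMF_APP with the namespace command and blocks until the RPC socket answers; the --interrupt-mode flag is what produces the thread.c and reactor.c notices above, with every reactor and nvmf poll-group thread coming up in interrupt mode instead of busy polling. A sketch of that start-and-wait step, assuming the default RPC socket /var/tmp/spdk.sock; the polling loop is a stand-in for the autotest waitforlisten helper, not its actual implementation:

    # Launch nvmf_tgt inside the target namespace, as the trace does
    # (-i 0: shm id, -e 0xFFFF: tracepoint mask, -m 0xF: four cores).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!

    # Stand-in for waitforlisten: poll until the app answers on its RPC socket.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done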
01:02:33.116 10:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:02:33.116 10:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 01:02:33.116 10:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:02:33.116 10:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 01:02:33.116 10:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:02:33.116 10:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:02:33.116 10:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:02:33.375 [2024-12-09 10:13:40.211174] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:02:33.375 10:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:02:33.635 10:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 01:02:33.635 10:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:02:33.895 10:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 01:02:33.895 10:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:02:34.154 10:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 01:02:34.154 10:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:02:34.154 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 01:02:34.154 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 01:02:34.414 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:02:34.697 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 01:02:34.697 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:02:34.956 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 01:02:34.956 10:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:02:35.216 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 01:02:35.216 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 01:02:35.475 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 01:02:35.475 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 01:02:35.475 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:02:35.736 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 01:02:35.736 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 01:02:35.995 10:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:02:35.995 [2024-12-09 10:13:43.035189] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:02:36.255 10:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 01:02:36.255 10:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 01:02:36.530 10:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 01:02:36.530 10:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 01:02:36.530 10:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 01:02:36.530 10:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 01:02:36.530 10:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 01:02:36.530 10:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 01:02:36.530 10:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 01:02:39.069 10:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:02:39.069 10:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 01:02:39.069 10:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:02:39.069 10:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 01:02:39.069 10:13:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 01:02:39.069 10:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 01:02:39.069 10:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 01:02:39.069 [global] 01:02:39.069 thread=1 01:02:39.069 invalidate=1 01:02:39.069 rw=write 01:02:39.069 time_based=1 01:02:39.069 runtime=1 01:02:39.069 ioengine=libaio 01:02:39.069 direct=1 01:02:39.069 bs=4096 01:02:39.069 iodepth=1 01:02:39.069 norandommap=0 01:02:39.069 numjobs=1 01:02:39.069 01:02:39.069 verify_dump=1 01:02:39.069 verify_backlog=512 01:02:39.069 verify_state_save=0 01:02:39.069 do_verify=1 01:02:39.069 verify=crc32c-intel 01:02:39.069 [job0] 01:02:39.069 filename=/dev/nvme0n1 01:02:39.069 [job1] 01:02:39.069 filename=/dev/nvme0n2 01:02:39.069 [job2] 01:02:39.069 filename=/dev/nvme0n3 01:02:39.069 [job3] 01:02:39.069 filename=/dev/nvme0n4 01:02:39.069 Could not set queue depth (nvme0n1) 01:02:39.069 Could not set queue depth (nvme0n2) 01:02:39.069 Could not set queue depth (nvme0n3) 01:02:39.069 Could not set queue depth (nvme0n4) 01:02:39.069 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:02:39.069 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:02:39.069 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:02:39.069 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:02:39.069 fio-3.35 01:02:39.069 Starting 4 threads 01:02:40.051 01:02:40.051 job0: (groupid=0, jobs=1): err= 0: pid=107633: Mon Dec 9 10:13:46 2024 01:02:40.051 read: IOPS=1484, BW=5938KiB/s (6081kB/s)(5944KiB/1001msec) 01:02:40.051 slat (usec): min=5, max=100, avg=12.30, stdev= 5.95 01:02:40.051 clat (usec): min=174, max=1018, avg=360.51, stdev=64.90 01:02:40.051 lat (usec): min=186, max=1044, avg=372.81, stdev=66.39 01:02:40.051 clat percentiles (usec): 01:02:40.051 | 1.00th=[ 192], 5.00th=[ 253], 10.00th=[ 306], 20.00th=[ 326], 01:02:40.051 | 30.00th=[ 334], 40.00th=[ 343], 50.00th=[ 355], 60.00th=[ 363], 01:02:40.051 | 70.00th=[ 375], 80.00th=[ 400], 90.00th=[ 441], 95.00th=[ 482], 01:02:40.051 | 99.00th=[ 545], 99.50th=[ 562], 99.90th=[ 635], 99.95th=[ 1020], 01:02:40.051 | 99.99th=[ 1020] 01:02:40.051 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 01:02:40.051 slat (usec): min=8, max=155, avg=20.95, stdev= 8.82 01:02:40.051 clat (usec): min=131, max=586, avg=266.91, stdev=36.32 01:02:40.051 lat (usec): min=149, max=600, avg=287.86, stdev=36.28 01:02:40.051 clat percentiles (usec): 01:02:40.051 | 1.00th=[ 192], 5.00th=[ 215], 10.00th=[ 227], 20.00th=[ 241], 01:02:40.051 | 30.00th=[ 249], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 273], 01:02:40.051 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 314], 95.00th=[ 330], 01:02:40.051 | 99.00th=[ 367], 99.50th=[ 379], 99.90th=[ 553], 99.95th=[ 586], 01:02:40.051 | 99.99th=[ 586] 01:02:40.051 bw ( KiB/s): min= 8192, max= 8192, per=33.40%, avg=8192.00, stdev= 0.00, samples=1 01:02:40.051 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 01:02:40.051 lat (usec) : 250=18.13%, 500=80.15%, 750=1.69% 01:02:40.051 lat (msec) : 2=0.03% 01:02:40.051 cpu : 
usr=0.50%, sys=4.00%, ctx=3030, majf=0, minf=9 01:02:40.051 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:02:40.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:40.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:40.051 issued rwts: total=1486,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 01:02:40.051 latency : target=0, window=0, percentile=100.00%, depth=1 01:02:40.051 job1: (groupid=0, jobs=1): err= 0: pid=107634: Mon Dec 9 10:13:46 2024 01:02:40.051 read: IOPS=1393, BW=5574KiB/s (5708kB/s)(5580KiB/1001msec) 01:02:40.051 slat (usec): min=21, max=109, avg=32.86, stdev= 5.57 01:02:40.051 clat (usec): min=162, max=975, avg=329.04, stdev=42.32 01:02:40.051 lat (usec): min=187, max=1002, avg=361.90, stdev=42.81 01:02:40.051 clat percentiles (usec): 01:02:40.051 | 1.00th=[ 212], 5.00th=[ 262], 10.00th=[ 277], 20.00th=[ 297], 01:02:40.051 | 30.00th=[ 314], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 343], 01:02:40.051 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 375], 95.00th=[ 383], 01:02:40.051 | 99.00th=[ 400], 99.50th=[ 412], 99.90th=[ 465], 99.95th=[ 979], 01:02:40.051 | 99.99th=[ 979] 01:02:40.051 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 01:02:40.051 slat (usec): min=34, max=147, avg=47.98, stdev= 6.75 01:02:40.051 clat (usec): min=149, max=413, avg=267.15, stdev=35.36 01:02:40.051 lat (usec): min=184, max=524, avg=315.13, stdev=36.80 01:02:40.051 clat percentiles (usec): 01:02:40.051 | 1.00th=[ 184], 5.00th=[ 212], 10.00th=[ 223], 20.00th=[ 239], 01:02:40.051 | 30.00th=[ 249], 40.00th=[ 260], 50.00th=[ 269], 60.00th=[ 277], 01:02:40.051 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 310], 95.00th=[ 326], 01:02:40.051 | 99.00th=[ 351], 99.50th=[ 359], 99.90th=[ 375], 99.95th=[ 412], 01:02:40.051 | 99.99th=[ 412] 01:02:40.051 bw ( KiB/s): min= 7832, max= 7832, per=31.93%, avg=7832.00, stdev= 0.00, samples=1 01:02:40.051 iops : min= 1958, max= 1958, avg=1958.00, stdev= 0.00, samples=1 01:02:40.051 lat (usec) : 250=17.95%, 500=82.02%, 1000=0.03% 01:02:40.051 cpu : usr=2.10%, sys=9.30%, ctx=2932, majf=0, minf=14 01:02:40.051 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:02:40.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:40.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:40.051 issued rwts: total=1395,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 01:02:40.051 latency : target=0, window=0, percentile=100.00%, depth=1 01:02:40.051 job2: (groupid=0, jobs=1): err= 0: pid=107635: Mon Dec 9 10:13:46 2024 01:02:40.051 read: IOPS=1373, BW=5493KiB/s (5625kB/s)(5504KiB/1002msec) 01:02:40.051 slat (nsec): min=27596, max=94251, avg=34585.81, stdev=4994.29 01:02:40.051 clat (usec): min=191, max=1419, avg=330.46, stdev=50.19 01:02:40.051 lat (usec): min=224, max=1454, avg=365.04, stdev=50.45 01:02:40.051 clat percentiles (usec): 01:02:40.051 | 1.00th=[ 215], 5.00th=[ 247], 10.00th=[ 277], 20.00th=[ 302], 01:02:40.051 | 30.00th=[ 314], 40.00th=[ 326], 50.00th=[ 338], 60.00th=[ 347], 01:02:40.051 | 70.00th=[ 355], 80.00th=[ 363], 90.00th=[ 375], 95.00th=[ 388], 01:02:40.051 | 99.00th=[ 400], 99.50th=[ 412], 99.90th=[ 424], 99.95th=[ 1418], 01:02:40.051 | 99.99th=[ 1418] 01:02:40.051 write: IOPS=1532, BW=6132KiB/s (6279kB/s)(6144KiB/1002msec); 0 zone resets 01:02:40.051 slat (usec): min=38, max=119, avg=50.72, stdev= 6.88 01:02:40.051 clat (usec): min=147, max=373, avg=266.64, stdev=36.48 
01:02:40.051 lat (usec): min=196, max=457, avg=317.36, stdev=37.13 01:02:40.051 clat percentiles (usec): 01:02:40.051 | 1.00th=[ 174], 5.00th=[ 202], 10.00th=[ 221], 20.00th=[ 239], 01:02:40.051 | 30.00th=[ 249], 40.00th=[ 260], 50.00th=[ 269], 60.00th=[ 277], 01:02:40.051 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 310], 95.00th=[ 326], 01:02:40.051 | 99.00th=[ 351], 99.50th=[ 355], 99.90th=[ 363], 99.95th=[ 375], 01:02:40.051 | 99.99th=[ 375] 01:02:40.051 bw ( KiB/s): min= 7768, max= 7768, per=31.67%, avg=7768.00, stdev= 0.00, samples=1 01:02:40.051 iops : min= 1942, max= 1942, avg=1942.00, stdev= 0.00, samples=1 01:02:40.051 lat (usec) : 250=18.92%, 500=81.04% 01:02:40.051 lat (msec) : 2=0.03% 01:02:40.051 cpu : usr=2.30%, sys=9.39%, ctx=2913, majf=0, minf=13 01:02:40.051 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:02:40.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:40.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:40.051 issued rwts: total=1376,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 01:02:40.051 latency : target=0, window=0, percentile=100.00%, depth=1 01:02:40.051 job3: (groupid=0, jobs=1): err= 0: pid=107636: Mon Dec 9 10:13:46 2024 01:02:40.051 read: IOPS=1485, BW=5942KiB/s (6085kB/s)(5948KiB/1001msec) 01:02:40.051 slat (nsec): min=6154, max=78259, avg=11395.31, stdev=4534.58 01:02:40.051 clat (usec): min=178, max=846, avg=361.17, stdev=59.83 01:02:40.051 lat (usec): min=189, max=861, avg=372.57, stdev=60.82 01:02:40.051 clat percentiles (usec): 01:02:40.051 | 1.00th=[ 206], 5.00th=[ 265], 10.00th=[ 306], 20.00th=[ 326], 01:02:40.051 | 30.00th=[ 334], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 367], 01:02:40.051 | 70.00th=[ 379], 80.00th=[ 404], 90.00th=[ 441], 95.00th=[ 474], 01:02:40.051 | 99.00th=[ 515], 99.50th=[ 545], 99.90th=[ 627], 99.95th=[ 848], 01:02:40.051 | 99.99th=[ 848] 01:02:40.051 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 01:02:40.051 slat (nsec): min=9304, max=71430, avg=22794.77, stdev=7859.56 01:02:40.051 clat (usec): min=159, max=602, avg=265.10, stdev=36.89 01:02:40.051 lat (usec): min=184, max=632, avg=287.89, stdev=37.09 01:02:40.051 clat percentiles (usec): 01:02:40.051 | 1.00th=[ 190], 5.00th=[ 212], 10.00th=[ 225], 20.00th=[ 239], 01:02:40.051 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 269], 01:02:40.051 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 314], 95.00th=[ 330], 01:02:40.051 | 99.00th=[ 375], 99.50th=[ 383], 99.90th=[ 578], 99.95th=[ 603], 01:02:40.051 | 99.99th=[ 603] 01:02:40.051 bw ( KiB/s): min= 8192, max= 8192, per=33.40%, avg=8192.00, stdev= 0.00, samples=1 01:02:40.051 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 01:02:40.051 lat (usec) : 250=18.72%, 500=80.38%, 750=0.86%, 1000=0.03% 01:02:40.051 cpu : usr=0.70%, sys=3.90%, ctx=3026, majf=0, minf=13 01:02:40.051 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:02:40.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:40.052 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:40.052 issued rwts: total=1487,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 01:02:40.052 latency : target=0, window=0, percentile=100.00%, depth=1 01:02:40.052 01:02:40.052 Run status group 0 (all jobs): 01:02:40.052 READ: bw=22.4MiB/s (23.5MB/s), 5493KiB/s-5942KiB/s (5625kB/s-6085kB/s), io=22.4MiB (23.5MB), run=1001-1002msec 01:02:40.052 WRITE: bw=24.0MiB/s (25.1MB/s), 
6132KiB/s-6138KiB/s (6279kB/s-6285kB/s), io=24.0MiB (25.2MB), run=1001-1002msec 01:02:40.052 01:02:40.052 Disk stats (read/write): 01:02:40.052 nvme0n1: ios=1239/1536, merge=0/0, ticks=437/407, in_queue=844, util=90.07% 01:02:40.052 nvme0n2: ios=1118/1536, merge=0/0, ticks=375/446, in_queue=821, util=90.63% 01:02:40.052 nvme0n3: ios=1098/1536, merge=0/0, ticks=385/451, in_queue=836, util=90.84% 01:02:40.052 nvme0n4: ios=1223/1536, merge=0/0, ticks=452/417, in_queue=869, util=90.62% 01:02:40.052 10:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 01:02:40.052 [global] 01:02:40.052 thread=1 01:02:40.052 invalidate=1 01:02:40.052 rw=randwrite 01:02:40.052 time_based=1 01:02:40.052 runtime=1 01:02:40.052 ioengine=libaio 01:02:40.052 direct=1 01:02:40.052 bs=4096 01:02:40.052 iodepth=1 01:02:40.052 norandommap=0 01:02:40.052 numjobs=1 01:02:40.052 01:02:40.052 verify_dump=1 01:02:40.052 verify_backlog=512 01:02:40.052 verify_state_save=0 01:02:40.052 do_verify=1 01:02:40.052 verify=crc32c-intel 01:02:40.052 [job0] 01:02:40.052 filename=/dev/nvme0n1 01:02:40.052 [job1] 01:02:40.052 filename=/dev/nvme0n2 01:02:40.052 [job2] 01:02:40.052 filename=/dev/nvme0n3 01:02:40.052 [job3] 01:02:40.052 filename=/dev/nvme0n4 01:02:40.311 Could not set queue depth (nvme0n1) 01:02:40.311 Could not set queue depth (nvme0n2) 01:02:40.311 Could not set queue depth (nvme0n3) 01:02:40.312 Could not set queue depth (nvme0n4) 01:02:40.312 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:02:40.312 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:02:40.312 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:02:40.312 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:02:40.312 fio-3.35 01:02:40.312 Starting 4 threads 01:02:41.693 01:02:41.693 job0: (groupid=0, jobs=1): err= 0: pid=107690: Mon Dec 9 10:13:48 2024 01:02:41.693 read: IOPS=1325, BW=5303KiB/s (5430kB/s)(5308KiB/1001msec) 01:02:41.693 slat (nsec): min=7115, max=86521, avg=23158.28, stdev=7906.17 01:02:41.693 clat (usec): min=173, max=5605, avg=377.27, stdev=168.40 01:02:41.693 lat (usec): min=195, max=5659, avg=400.43, stdev=168.48 01:02:41.693 clat percentiles (usec): 01:02:41.693 | 1.00th=[ 212], 5.00th=[ 262], 10.00th=[ 277], 20.00th=[ 310], 01:02:41.693 | 30.00th=[ 330], 40.00th=[ 347], 50.00th=[ 367], 60.00th=[ 388], 01:02:41.693 | 70.00th=[ 408], 80.00th=[ 433], 90.00th=[ 469], 95.00th=[ 502], 01:02:41.693 | 99.00th=[ 578], 99.50th=[ 603], 99.90th=[ 2040], 99.95th=[ 5604], 01:02:41.693 | 99.99th=[ 5604] 01:02:41.693 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 01:02:41.693 slat (usec): min=14, max=117, avg=44.63, stdev= 7.93 01:02:41.693 clat (usec): min=142, max=5872, avg=255.27, stdev=191.50 01:02:41.693 lat (usec): min=181, max=5917, avg=299.90, stdev=192.55 01:02:41.693 clat percentiles (usec): 01:02:41.693 | 1.00th=[ 172], 5.00th=[ 188], 10.00th=[ 200], 20.00th=[ 215], 01:02:41.693 | 30.00th=[ 225], 40.00th=[ 235], 50.00th=[ 243], 60.00th=[ 253], 01:02:41.693 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 297], 95.00th=[ 318], 01:02:41.693 | 99.00th=[ 363], 99.50th=[ 433], 99.90th=[ 4686], 99.95th=[ 5866], 01:02:41.693 | 99.99th=[ 5866] 01:02:41.693 bw ( 
KiB/s): min= 8192, max= 8192, per=28.07%, avg=8192.00, stdev= 0.00, samples=1 01:02:41.693 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 01:02:41.693 lat (usec) : 250=31.89%, 500=65.42%, 750=2.41%, 1000=0.03% 01:02:41.693 lat (msec) : 2=0.10%, 4=0.03%, 10=0.10% 01:02:41.693 cpu : usr=1.50%, sys=7.90%, ctx=2865, majf=0, minf=17 01:02:41.693 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:02:41.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:41.693 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:41.693 issued rwts: total=1327,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 01:02:41.693 latency : target=0, window=0, percentile=100.00%, depth=1 01:02:41.693 job1: (groupid=0, jobs=1): err= 0: pid=107691: Mon Dec 9 10:13:48 2024 01:02:41.693 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 01:02:41.693 slat (usec): min=11, max=153, avg=19.31, stdev= 6.10 01:02:41.693 clat (usec): min=228, max=670, avg=297.50, stdev=28.50 01:02:41.693 lat (usec): min=248, max=690, avg=316.81, stdev=29.43 01:02:41.693 clat percentiles (usec): 01:02:41.693 | 1.00th=[ 247], 5.00th=[ 265], 10.00th=[ 273], 20.00th=[ 281], 01:02:41.693 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 297], 60.00th=[ 302], 01:02:41.693 | 70.00th=[ 306], 80.00th=[ 314], 90.00th=[ 326], 95.00th=[ 334], 01:02:41.693 | 99.00th=[ 371], 99.50th=[ 424], 99.90th=[ 668], 99.95th=[ 668], 01:02:41.693 | 99.99th=[ 668] 01:02:41.693 write: IOPS=2007, BW=8032KiB/s (8225kB/s)(8040KiB/1001msec); 0 zone resets 01:02:41.693 slat (usec): min=12, max=110, avg=24.16, stdev= 8.94 01:02:41.693 clat (usec): min=111, max=592, avg=228.43, stdev=43.65 01:02:41.693 lat (usec): min=136, max=632, avg=252.59, stdev=44.68 01:02:41.693 clat percentiles (usec): 01:02:41.693 | 1.00th=[ 167], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 198], 01:02:41.693 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 219], 60.00th=[ 227], 01:02:41.693 | 70.00th=[ 235], 80.00th=[ 245], 90.00th=[ 285], 95.00th=[ 326], 01:02:41.693 | 99.00th=[ 379], 99.50th=[ 396], 99.90th=[ 424], 99.95th=[ 586], 01:02:41.693 | 99.99th=[ 594] 01:02:41.693 bw ( KiB/s): min= 8192, max= 8192, per=28.07%, avg=8192.00, stdev= 0.00, samples=1 01:02:41.693 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 01:02:41.693 lat (usec) : 250=47.12%, 500=52.71%, 750=0.17% 01:02:41.693 cpu : usr=1.00%, sys=5.60%, ctx=3557, majf=0, minf=15 01:02:41.693 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:02:41.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:41.693 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:41.693 issued rwts: total=1536,2010,0,0 short=0,0,0,0 dropped=0,0,0,0 01:02:41.693 latency : target=0, window=0, percentile=100.00%, depth=1 01:02:41.693 job2: (groupid=0, jobs=1): err= 0: pid=107692: Mon Dec 9 10:13:48 2024 01:02:41.693 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 01:02:41.693 slat (nsec): min=6985, max=85903, avg=13067.87, stdev=6665.24 01:02:41.693 clat (usec): min=157, max=692, avg=246.71, stdev=46.23 01:02:41.693 lat (usec): min=166, max=733, avg=259.78, stdev=48.43 01:02:41.693 clat percentiles (usec): 01:02:41.693 | 1.00th=[ 174], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 200], 01:02:41.693 | 30.00th=[ 212], 40.00th=[ 229], 50.00th=[ 245], 60.00th=[ 262], 01:02:41.693 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 310], 95.00th=[ 322], 01:02:41.693 | 99.00th=[ 347], 99.50th=[ 
355], 99.90th=[ 392], 99.95th=[ 429], 01:02:41.693 | 99.99th=[ 693] 01:02:41.693 write: IOPS=2218, BW=8875KiB/s (9088kB/s)(8884KiB/1001msec); 0 zone resets 01:02:41.693 slat (nsec): min=9983, max=84296, avg=20848.54, stdev=12192.80 01:02:41.693 clat (usec): min=97, max=1342, avg=186.95, stdev=51.18 01:02:41.693 lat (usec): min=108, max=1354, avg=207.80, stdev=58.20 01:02:41.693 clat percentiles (usec): 01:02:41.693 | 1.00th=[ 114], 5.00th=[ 126], 10.00th=[ 133], 20.00th=[ 141], 01:02:41.693 | 30.00th=[ 153], 40.00th=[ 169], 50.00th=[ 184], 60.00th=[ 200], 01:02:41.693 | 70.00th=[ 215], 80.00th=[ 229], 90.00th=[ 245], 95.00th=[ 260], 01:02:41.693 | 99.00th=[ 285], 99.50th=[ 306], 99.90th=[ 445], 99.95th=[ 529], 01:02:41.693 | 99.99th=[ 1336] 01:02:41.693 bw ( KiB/s): min= 8312, max= 8312, per=28.48%, avg=8312.00, stdev= 0.00, samples=1 01:02:41.693 iops : min= 2078, max= 2078, avg=2078.00, stdev= 0.00, samples=1 01:02:41.693 lat (usec) : 100=0.05%, 250=73.74%, 500=26.14%, 750=0.05% 01:02:41.693 lat (msec) : 2=0.02% 01:02:41.693 cpu : usr=0.90%, sys=5.90%, ctx=4271, majf=0, minf=7 01:02:41.693 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:02:41.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:41.693 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:41.693 issued rwts: total=2048,2221,0,0 short=0,0,0,0 dropped=0,0,0,0 01:02:41.693 latency : target=0, window=0, percentile=100.00%, depth=1 01:02:41.693 job3: (groupid=0, jobs=1): err= 0: pid=107693: Mon Dec 9 10:13:48 2024 01:02:41.693 read: IOPS=1448, BW=5794KiB/s (5933kB/s)(5800KiB/1001msec) 01:02:41.693 slat (nsec): min=7514, max=71540, avg=23428.73, stdev=7855.79 01:02:41.693 clat (usec): min=170, max=3519, avg=349.03, stdev=124.69 01:02:41.693 lat (usec): min=183, max=3566, avg=372.46, stdev=125.21 01:02:41.693 clat percentiles (usec): 01:02:41.693 | 1.00th=[ 194], 5.00th=[ 229], 10.00th=[ 249], 20.00th=[ 273], 01:02:41.693 | 30.00th=[ 297], 40.00th=[ 318], 50.00th=[ 334], 60.00th=[ 359], 01:02:41.693 | 70.00th=[ 388], 80.00th=[ 420], 90.00th=[ 449], 95.00th=[ 478], 01:02:41.693 | 99.00th=[ 553], 99.50th=[ 594], 99.90th=[ 2147], 99.95th=[ 3523], 01:02:41.693 | 99.99th=[ 3523] 01:02:41.693 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 01:02:41.693 slat (usec): min=30, max=113, avg=45.03, stdev= 8.19 01:02:41.693 clat (usec): min=145, max=1638, avg=248.87, stdev=52.42 01:02:41.693 lat (usec): min=193, max=1677, avg=293.90, stdev=53.15 01:02:41.693 clat percentiles (usec): 01:02:41.693 | 1.00th=[ 176], 5.00th=[ 192], 10.00th=[ 202], 20.00th=[ 217], 01:02:41.693 | 30.00th=[ 227], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 255], 01:02:41.693 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 297], 95.00th=[ 314], 01:02:41.693 | 99.00th=[ 371], 99.50th=[ 396], 99.90th=[ 445], 99.95th=[ 1631], 01:02:41.693 | 99.99th=[ 1631] 01:02:41.693 bw ( KiB/s): min= 8192, max= 8192, per=28.07%, avg=8192.00, stdev= 0.00, samples=1 01:02:41.693 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 01:02:41.694 lat (usec) : 250=33.52%, 500=65.07%, 750=1.24%, 1000=0.07% 01:02:41.694 lat (msec) : 2=0.03%, 4=0.07% 01:02:41.694 cpu : usr=1.50%, sys=8.30%, ctx=2987, majf=0, minf=7 01:02:41.694 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:02:41.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:41.694 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
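(Annotation, not harness output: a quick cross-check of fio's arithmetic for job3 above, illustrative only. The read line reports io=5800KiB over run=1001msec alongside BW=5794KiB/s; the two figures agree, as the shell fragment below confirms.)

# Illustrative only: recompute fio's reported bandwidth for job3 from
# the io= and run= figures above (5800 KiB moved in 1001 msec).
kib=5800; msec=1001
echo $(( kib * 1000 / msec ))   # prints 5794 -> matches BW=5794KiB/s

The same identity (KiB/s = io_KiB * 1000 / run_msec) holds for every per-job read/write line in this output.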
01:02:41.694 issued rwts: total=1450,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 01:02:41.694 latency : target=0, window=0, percentile=100.00%, depth=1 01:02:41.694 01:02:41.694 Run status group 0 (all jobs): 01:02:41.694 READ: bw=24.8MiB/s (26.0MB/s), 5303KiB/s-8184KiB/s (5430kB/s-8380kB/s), io=24.8MiB (26.1MB), run=1001-1001msec 01:02:41.694 WRITE: bw=28.5MiB/s (29.9MB/s), 6138KiB/s-8875KiB/s (6285kB/s-9088kB/s), io=28.5MiB (29.9MB), run=1001-1001msec 01:02:41.694 01:02:41.694 Disk stats (read/write): 01:02:41.694 nvme0n1: ios=1112/1536, merge=0/0, ticks=408/416, in_queue=824, util=89.37% 01:02:41.694 nvme0n2: ios=1585/1599, merge=0/0, ticks=496/364, in_queue=860, util=90.62% 01:02:41.694 nvme0n3: ios=1859/2048, merge=0/0, ticks=464/385, in_queue=849, util=90.93% 01:02:41.694 nvme0n4: ios=1159/1536, merge=0/0, ticks=426/401, in_queue=827, util=90.59% 01:02:41.694 10:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 01:02:41.694 [global] 01:02:41.694 thread=1 01:02:41.694 invalidate=1 01:02:41.694 rw=write 01:02:41.694 time_based=1 01:02:41.694 runtime=1 01:02:41.694 ioengine=libaio 01:02:41.694 direct=1 01:02:41.694 bs=4096 01:02:41.694 iodepth=128 01:02:41.694 norandommap=0 01:02:41.694 numjobs=1 01:02:41.694 01:02:41.694 verify_dump=1 01:02:41.694 verify_backlog=512 01:02:41.694 verify_state_save=0 01:02:41.694 do_verify=1 01:02:41.694 verify=crc32c-intel 01:02:41.694 [job0] 01:02:41.694 filename=/dev/nvme0n1 01:02:41.694 [job1] 01:02:41.694 filename=/dev/nvme0n2 01:02:41.694 [job2] 01:02:41.694 filename=/dev/nvme0n3 01:02:41.694 [job3] 01:02:41.694 filename=/dev/nvme0n4 01:02:41.694 Could not set queue depth (nvme0n1) 01:02:41.694 Could not set queue depth (nvme0n2) 01:02:41.694 Could not set queue depth (nvme0n3) 01:02:41.694 Could not set queue depth (nvme0n4) 01:02:41.694 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:02:41.694 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:02:41.694 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:02:41.694 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:02:41.694 fio-3.35 01:02:41.694 Starting 4 threads 01:02:43.075 01:02:43.075 job0: (groupid=0, jobs=1): err= 0: pid=107746: Mon Dec 9 10:13:49 2024 01:02:43.075 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 01:02:43.075 slat (usec): min=7, max=10351, avg=184.93, stdev=1008.47 01:02:43.075 clat (usec): min=6046, max=41531, avg=23780.65, stdev=6405.87 01:02:43.075 lat (usec): min=6065, max=41562, avg=23965.58, stdev=6496.10 01:02:43.075 clat percentiles (usec): 01:02:43.075 | 1.00th=[ 6783], 5.00th=[14222], 10.00th=[16057], 20.00th=[17695], 01:02:43.075 | 30.00th=[18744], 40.00th=[22152], 50.00th=[24773], 60.00th=[25822], 01:02:43.075 | 70.00th=[28967], 80.00th=[29754], 90.00th=[31065], 95.00th=[33162], 01:02:43.075 | 99.00th=[36439], 99.50th=[39584], 99.90th=[41681], 99.95th=[41681], 01:02:43.075 | 99.99th=[41681] 01:02:43.075 write: IOPS=2566, BW=10.0MiB/s (10.5MB/s)(10.1MiB/1003msec); 0 zone resets 01:02:43.075 slat (usec): min=22, max=10667, avg=194.02, stdev=946.88 01:02:43.075 clat (usec): min=2372, max=48137, avg=25347.36, stdev=7984.78 01:02:43.075 lat (usec): min=2403, max=48168, avg=25541.38, stdev=8068.41 
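(Annotation: the [global]/[jobN] block dumped above by fio-wrapper is the complete effective configuration for this write/iodepth=128 pass. As a hedged sketch, and assuming fio is installed and the same /dev/nvme0nX namespaces are still connected, the run could be reproduced outside the harness as shown below; the heredoc target path is illustrative, and jobs 1-3 would add nvme0n2..n4 the same way.)

# Sketch: rebuild the dumped job file and run it directly with fio.
cat > /tmp/write-d128.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=128
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF
fio /tmp/write-d128.fio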
01:02:43.075 clat percentiles (usec): 01:02:43.075 | 1.00th=[11469], 5.00th=[12911], 10.00th=[15533], 20.00th=[16581], 01:02:43.075 | 30.00th=[21890], 40.00th=[23462], 50.00th=[25035], 60.00th=[27395], 01:02:43.075 | 70.00th=[29230], 80.00th=[31327], 90.00th=[34866], 95.00th=[40109], 01:02:43.075 | 99.00th=[46924], 99.50th=[46924], 99.90th=[47973], 99.95th=[47973], 01:02:43.075 | 99.99th=[47973] 01:02:43.075 bw ( KiB/s): min= 8968, max=11512, per=23.18%, avg=10240.00, stdev=1798.88, samples=2 01:02:43.075 iops : min= 2242, max= 2878, avg=2560.00, stdev=449.72, samples=2 01:02:43.075 lat (msec) : 4=0.27%, 10=0.76%, 20=29.78%, 50=69.19% 01:02:43.075 cpu : usr=3.59%, sys=9.58%, ctx=244, majf=0, minf=13 01:02:43.075 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 01:02:43.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:43.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:02:43.075 issued rwts: total=2560,2574,0,0 short=0,0,0,0 dropped=0,0,0,0 01:02:43.075 latency : target=0, window=0, percentile=100.00%, depth=128 01:02:43.075 job1: (groupid=0, jobs=1): err= 0: pid=107747: Mon Dec 9 10:13:49 2024 01:02:43.075 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 01:02:43.075 slat (usec): min=16, max=12868, avg=162.13, stdev=877.95 01:02:43.075 clat (usec): min=12168, max=49185, avg=20910.54, stdev=8235.76 01:02:43.075 lat (usec): min=12652, max=50474, avg=21072.68, stdev=8311.66 01:02:43.075 clat percentiles (usec): 01:02:43.075 | 1.00th=[12911], 5.00th=[15401], 10.00th=[16188], 20.00th=[16712], 01:02:43.075 | 30.00th=[16909], 40.00th=[17433], 50.00th=[17695], 60.00th=[18220], 01:02:43.075 | 70.00th=[18482], 80.00th=[19530], 90.00th=[38011], 95.00th=[41157], 01:02:43.075 | 99.00th=[46924], 99.50th=[49021], 99.90th=[49021], 99.95th=[49021], 01:02:43.075 | 99.99th=[49021] 01:02:43.075 write: IOPS=3368, BW=13.2MiB/s (13.8MB/s)(13.2MiB/1003msec); 0 zone resets 01:02:43.075 slat (usec): min=18, max=7264, avg=138.82, stdev=639.75 01:02:43.075 clat (usec): min=1893, max=44667, avg=18416.60, stdev=4428.95 01:02:43.075 lat (usec): min=5947, max=44715, avg=18555.42, stdev=4423.04 01:02:43.075 clat percentiles (usec): 01:02:43.075 | 1.00th=[11600], 5.00th=[13173], 10.00th=[14615], 20.00th=[16450], 01:02:43.075 | 30.00th=[16909], 40.00th=[17171], 50.00th=[17695], 60.00th=[18220], 01:02:43.075 | 70.00th=[18482], 80.00th=[19530], 90.00th=[21103], 95.00th=[29492], 01:02:43.075 | 99.00th=[33817], 99.50th=[35914], 99.90th=[44827], 99.95th=[44827], 01:02:43.075 | 99.99th=[44827] 01:02:43.075 bw ( KiB/s): min=10304, max=15735, per=29.47%, avg=13019.50, stdev=3840.30, samples=2 01:02:43.075 iops : min= 2576, max= 3933, avg=3254.50, stdev=959.54, samples=2 01:02:43.075 lat (msec) : 2=0.02%, 10=0.48%, 20=83.34%, 50=16.17% 01:02:43.075 cpu : usr=4.39%, sys=12.08%, ctx=331, majf=0, minf=11 01:02:43.075 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 01:02:43.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:43.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:02:43.075 issued rwts: total=3072,3379,0,0 short=0,0,0,0 dropped=0,0,0,0 01:02:43.075 latency : target=0, window=0, percentile=100.00%, depth=128 01:02:43.075 job2: (groupid=0, jobs=1): err= 0: pid=107748: Mon Dec 9 10:13:49 2024 01:02:43.075 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 01:02:43.075 slat (usec): min=7, max=4492, avg=139.34, stdev=635.55 01:02:43.075 
clat (usec): min=4668, max=22304, avg=17828.56, stdev=1950.36 01:02:43.075 lat (usec): min=4687, max=23537, avg=17967.90, stdev=1878.67 01:02:43.075 clat percentiles (usec): 01:02:43.075 | 1.00th=[ 9241], 5.00th=[14877], 10.00th=[15926], 20.00th=[16909], 01:02:43.075 | 30.00th=[17171], 40.00th=[17957], 50.00th=[18482], 60.00th=[18482], 01:02:43.075 | 70.00th=[18744], 80.00th=[19006], 90.00th=[19268], 95.00th=[19530], 01:02:43.076 | 99.00th=[20841], 99.50th=[21103], 99.90th=[22152], 99.95th=[22414], 01:02:43.076 | 99.99th=[22414] 01:02:43.076 write: IOPS=3579, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 01:02:43.076 slat (usec): min=21, max=4235, avg=129.12, stdev=411.98 01:02:43.076 clat (usec): min=897, max=21792, avg=17396.72, stdev=1755.71 01:02:43.076 lat (usec): min=955, max=21965, avg=17525.84, stdev=1745.66 01:02:43.076 clat percentiles (usec): 01:02:43.076 | 1.00th=[13566], 5.00th=[14353], 10.00th=[15008], 20.00th=[15795], 01:02:43.076 | 30.00th=[16712], 40.00th=[17171], 50.00th=[17433], 60.00th=[17957], 01:02:43.076 | 70.00th=[18220], 80.00th=[18744], 90.00th=[19268], 95.00th=[20579], 01:02:43.076 | 99.00th=[21103], 99.50th=[21365], 99.90th=[21890], 99.95th=[21890], 01:02:43.076 | 99.99th=[21890] 01:02:43.076 bw ( KiB/s): min=12544, max=16160, per=32.49%, avg=14352.00, stdev=2556.90, samples=2 01:02:43.076 iops : min= 3136, max= 4040, avg=3588.00, stdev=639.22, samples=2 01:02:43.076 lat (usec) : 1000=0.03% 01:02:43.076 lat (msec) : 10=0.89%, 20=94.52%, 50=4.56% 01:02:43.076 cpu : usr=4.30%, sys=14.29%, ctx=537, majf=0, minf=7 01:02:43.076 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 01:02:43.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:43.076 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:02:43.076 issued rwts: total=3584,3587,0,0 short=0,0,0,0 dropped=0,0,0,0 01:02:43.076 latency : target=0, window=0, percentile=100.00%, depth=128 01:02:43.076 job3: (groupid=0, jobs=1): err= 0: pid=107749: Mon Dec 9 10:13:49 2024 01:02:43.076 read: IOPS=1449, BW=5799KiB/s (5938kB/s)(5816KiB/1003msec) 01:02:43.076 slat (usec): min=4, max=12952, avg=325.26, stdev=1551.27 01:02:43.076 clat (usec): min=2147, max=62225, avg=38888.20, stdev=9095.10 01:02:43.076 lat (usec): min=2165, max=63620, avg=39213.46, stdev=9055.82 01:02:43.076 clat percentiles (usec): 01:02:43.076 | 1.00th=[ 2540], 5.00th=[25822], 10.00th=[27919], 20.00th=[33162], 01:02:43.076 | 30.00th=[35914], 40.00th=[39060], 50.00th=[40109], 60.00th=[41681], 01:02:43.076 | 70.00th=[42730], 80.00th=[46400], 90.00th=[47973], 95.00th=[50594], 01:02:43.076 | 99.00th=[55837], 99.50th=[57934], 99.90th=[62129], 99.95th=[62129], 01:02:43.076 | 99.99th=[62129] 01:02:43.076 write: IOPS=1531, BW=6126KiB/s (6273kB/s)(6144KiB/1003msec); 0 zone resets 01:02:43.076 slat (usec): min=6, max=11582, avg=333.65, stdev=1292.35 01:02:43.076 clat (usec): min=26927, max=69584, avg=45005.23, stdev=12950.43 01:02:43.076 lat (usec): min=27040, max=69616, avg=45338.88, stdev=13008.04 01:02:43.076 clat percentiles (usec): 01:02:43.076 | 1.00th=[28443], 5.00th=[29492], 10.00th=[31589], 20.00th=[34341], 01:02:43.076 | 30.00th=[36963], 40.00th=[38011], 50.00th=[39584], 60.00th=[42206], 01:02:43.076 | 70.00th=[52691], 80.00th=[60556], 90.00th=[67634], 95.00th=[68682], 01:02:43.076 | 99.00th=[68682], 99.50th=[68682], 99.90th=[69731], 99.95th=[69731], 01:02:43.076 | 99.99th=[69731] 01:02:43.076 bw ( KiB/s): min= 5835, max= 6464, per=13.92%, avg=6149.50, stdev=444.77, 
samples=2 01:02:43.076 iops : min= 1458, max= 1616, avg=1537.00, stdev=111.72, samples=2 01:02:43.076 lat (msec) : 4=0.67%, 20=1.07%, 50=79.00%, 100=19.26% 01:02:43.076 cpu : usr=2.00%, sys=5.49%, ctx=227, majf=0, minf=17 01:02:43.076 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 01:02:43.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:43.076 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:02:43.076 issued rwts: total=1454,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 01:02:43.076 latency : target=0, window=0, percentile=100.00%, depth=128 01:02:43.076 01:02:43.076 Run status group 0 (all jobs): 01:02:43.076 READ: bw=41.6MiB/s (43.6MB/s), 5799KiB/s-14.0MiB/s (5938kB/s-14.7MB/s), io=41.7MiB (43.7MB), run=1002-1003msec 01:02:43.076 WRITE: bw=43.1MiB/s (45.2MB/s), 6126KiB/s-14.0MiB/s (6273kB/s-14.7MB/s), io=43.3MiB (45.4MB), run=1002-1003msec 01:02:43.076 01:02:43.076 Disk stats (read/write): 01:02:43.076 nvme0n1: ios=2098/2089, merge=0/0, ticks=16744/17573, in_queue=34317, util=90.18% 01:02:43.076 nvme0n2: ios=2938/3072, merge=0/0, ticks=16624/15349, in_queue=31973, util=90.24% 01:02:43.076 nvme0n3: ios=3115/3291, merge=0/0, ticks=12904/12860, in_queue=25764, util=90.76% 01:02:43.076 nvme0n4: ios=1091/1536, merge=0/0, ticks=10293/16570, in_queue=26863, util=90.53% 01:02:43.076 10:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 01:02:43.076 [global] 01:02:43.076 thread=1 01:02:43.076 invalidate=1 01:02:43.076 rw=randwrite 01:02:43.076 time_based=1 01:02:43.076 runtime=1 01:02:43.076 ioengine=libaio 01:02:43.076 direct=1 01:02:43.076 bs=4096 01:02:43.076 iodepth=128 01:02:43.076 norandommap=0 01:02:43.076 numjobs=1 01:02:43.076 01:02:43.076 verify_dump=1 01:02:43.076 verify_backlog=512 01:02:43.076 verify_state_save=0 01:02:43.076 do_verify=1 01:02:43.076 verify=crc32c-intel 01:02:43.076 [job0] 01:02:43.076 filename=/dev/nvme0n1 01:02:43.076 [job1] 01:02:43.076 filename=/dev/nvme0n2 01:02:43.076 [job2] 01:02:43.076 filename=/dev/nvme0n3 01:02:43.076 [job3] 01:02:43.076 filename=/dev/nvme0n4 01:02:43.076 Could not set queue depth (nvme0n1) 01:02:43.076 Could not set queue depth (nvme0n2) 01:02:43.076 Could not set queue depth (nvme0n3) 01:02:43.076 Could not set queue depth (nvme0n4) 01:02:43.076 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:02:43.076 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:02:43.076 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:02:43.076 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:02:43.076 fio-3.35 01:02:43.076 Starting 4 threads 01:02:44.458 01:02:44.458 job0: (groupid=0, jobs=1): err= 0: pid=107810: Mon Dec 9 10:13:51 2024 01:02:44.458 read: IOPS=2737, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1005msec) 01:02:44.458 slat (usec): min=4, max=6843, avg=170.98, stdev=735.09 01:02:44.458 clat (usec): min=2680, max=42502, avg=21666.70, stdev=4138.04 01:02:44.458 lat (usec): min=6666, max=42548, avg=21837.68, stdev=4181.53 01:02:44.458 clat percentiles (usec): 01:02:44.458 | 1.00th=[ 7898], 5.00th=[16188], 10.00th=[17695], 20.00th=[18744], 01:02:44.458 | 30.00th=[19792], 40.00th=[20841], 50.00th=[21627], 
60.00th=[22152], 01:02:44.458 | 70.00th=[22938], 80.00th=[23987], 90.00th=[25822], 95.00th=[28443], 01:02:44.458 | 99.00th=[34866], 99.50th=[39060], 99.90th=[42206], 99.95th=[42206], 01:02:44.458 | 99.99th=[42730] 01:02:44.458 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 01:02:44.458 slat (usec): min=7, max=8495, avg=162.74, stdev=646.82 01:02:44.458 clat (usec): min=12715, max=50056, avg=21870.96, stdev=7694.91 01:02:44.458 lat (usec): min=12746, max=50096, avg=22033.70, stdev=7750.74 01:02:44.458 clat percentiles (usec): 01:02:44.458 | 1.00th=[14353], 5.00th=[15401], 10.00th=[16188], 20.00th=[17433], 01:02:44.458 | 30.00th=[17957], 40.00th=[18482], 50.00th=[18744], 60.00th=[19530], 01:02:44.458 | 70.00th=[20841], 80.00th=[22676], 90.00th=[34866], 95.00th=[41157], 01:02:44.458 | 99.00th=[46924], 99.50th=[47973], 99.90th=[49546], 99.95th=[49546], 01:02:44.458 | 99.99th=[50070] 01:02:44.458 bw ( KiB/s): min=11608, max=12968, per=18.99%, avg=12288.00, stdev=961.67, samples=2 01:02:44.458 iops : min= 2902, max= 3242, avg=3072.00, stdev=240.42, samples=2 01:02:44.458 lat (msec) : 4=0.02%, 10=0.72%, 20=47.59%, 50=51.66%, 100=0.02% 01:02:44.458 cpu : usr=3.19%, sys=12.05%, ctx=717, majf=0, minf=7 01:02:44.458 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 01:02:44.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:44.458 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:02:44.458 issued rwts: total=2751,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 01:02:44.458 latency : target=0, window=0, percentile=100.00%, depth=128 01:02:44.458 job1: (groupid=0, jobs=1): err= 0: pid=107811: Mon Dec 9 10:13:51 2024 01:02:44.458 read: IOPS=2631, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1002msec) 01:02:44.458 slat (usec): min=4, max=8100, avg=172.02, stdev=737.24 01:02:44.458 clat (usec): min=1663, max=41083, avg=22601.91, stdev=4949.43 01:02:44.459 lat (usec): min=1683, max=41609, avg=22773.93, stdev=4988.37 01:02:44.459 clat percentiles (usec): 01:02:44.459 | 1.00th=[ 2245], 5.00th=[17957], 10.00th=[19006], 20.00th=[19792], 01:02:44.459 | 30.00th=[20317], 40.00th=[21365], 50.00th=[22152], 60.00th=[22938], 01:02:44.459 | 70.00th=[24249], 80.00th=[25035], 90.00th=[27919], 95.00th=[31327], 01:02:44.459 | 99.00th=[36963], 99.50th=[39060], 99.90th=[41157], 99.95th=[41157], 01:02:44.459 | 99.99th=[41157] 01:02:44.459 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 01:02:44.459 slat (usec): min=7, max=7125, avg=167.86, stdev=628.52 01:02:44.459 clat (usec): min=11486, max=46693, avg=21702.27, stdev=7474.67 01:02:44.459 lat (usec): min=11518, max=47706, avg=21870.13, stdev=7533.53 01:02:44.459 clat percentiles (usec): 01:02:44.459 | 1.00th=[12256], 5.00th=[14615], 10.00th=[15795], 20.00th=[16909], 01:02:44.459 | 30.00th=[17957], 40.00th=[18482], 50.00th=[19268], 60.00th=[20055], 01:02:44.459 | 70.00th=[20841], 80.00th=[24249], 90.00th=[34866], 95.00th=[40109], 01:02:44.459 | 99.00th=[44827], 99.50th=[45351], 99.90th=[45876], 99.95th=[45876], 01:02:44.459 | 99.99th=[46924] 01:02:44.459 bw ( KiB/s): min=10704, max=13472, per=18.68%, avg=12088.00, stdev=1957.27, samples=2 01:02:44.459 iops : min= 2676, max= 3368, avg=3022.00, stdev=489.32, samples=2 01:02:44.459 lat (msec) : 2=0.26%, 4=0.39%, 10=0.70%, 20=41.92%, 50=56.73% 01:02:44.459 cpu : usr=3.50%, sys=11.39%, ctx=736, majf=0, minf=17 01:02:44.459 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 01:02:44.459 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:44.459 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:02:44.459 issued rwts: total=2637,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 01:02:44.459 latency : target=0, window=0, percentile=100.00%, depth=128 01:02:44.459 job2: (groupid=0, jobs=1): err= 0: pid=107812: Mon Dec 9 10:13:51 2024 01:02:44.459 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 01:02:44.459 slat (usec): min=8, max=8081, avg=84.25, stdev=347.91 01:02:44.459 clat (usec): min=8451, max=23415, avg=11423.35, stdev=1887.23 01:02:44.459 lat (usec): min=8693, max=23439, avg=11507.60, stdev=1883.60 01:02:44.459 clat percentiles (usec): 01:02:44.459 | 1.00th=[ 8979], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10028], 01:02:44.459 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11207], 60.00th=[11863], 01:02:44.459 | 70.00th=[12256], 80.00th=[12518], 90.00th=[12780], 95.00th=[13173], 01:02:44.459 | 99.00th=[21890], 99.50th=[22152], 99.90th=[22152], 99.95th=[23462], 01:02:44.459 | 99.99th=[23462] 01:02:44.459 write: IOPS=6008, BW=23.5MiB/s (24.6MB/s)(23.5MiB/1001msec); 0 zone resets 01:02:44.459 slat (usec): min=20, max=2528, avg=77.62, stdev=260.64 01:02:44.459 clat (usec): min=323, max=13312, avg=10340.15, stdev=1272.25 01:02:44.459 lat (usec): min=2114, max=13460, avg=10417.77, stdev=1279.06 01:02:44.459 clat percentiles (usec): 01:02:44.459 | 1.00th=[ 6128], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[ 9503], 01:02:44.459 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10421], 01:02:44.459 | 70.00th=[10945], 80.00th=[11600], 90.00th=[11994], 95.00th=[12387], 01:02:44.459 | 99.00th=[12780], 99.50th=[12911], 99.90th=[13304], 99.95th=[13304], 01:02:44.459 | 99.99th=[13304] 01:02:44.459 bw ( KiB/s): min=22528, max=24625, per=36.44%, avg=23576.50, stdev=1482.80, samples=2 01:02:44.459 iops : min= 5632, max= 6156, avg=5894.00, stdev=370.52, samples=2 01:02:44.459 lat (usec) : 500=0.01% 01:02:44.459 lat (msec) : 4=0.31%, 10=31.60%, 20=67.27%, 50=0.82% 01:02:44.459 cpu : usr=5.70%, sys=23.30%, ctx=732, majf=0, minf=14 01:02:44.459 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 01:02:44.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:44.459 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:02:44.459 issued rwts: total=5632,6015,0,0 short=0,0,0,0 dropped=0,0,0,0 01:02:44.459 latency : target=0, window=0, percentile=100.00%, depth=128 01:02:44.459 job3: (groupid=0, jobs=1): err= 0: pid=107813: Mon Dec 9 10:13:51 2024 01:02:44.459 read: IOPS=3769, BW=14.7MiB/s (15.4MB/s)(14.8MiB/1002msec) 01:02:44.459 slat (usec): min=7, max=13303, avg=123.00, stdev=637.77 01:02:44.459 clat (usec): min=1055, max=43934, avg=15600.57, stdev=7199.59 01:02:44.459 lat (usec): min=1970, max=43956, avg=15723.57, stdev=7243.63 01:02:44.459 clat percentiles (usec): 01:02:44.459 | 1.00th=[ 4883], 5.00th=[10683], 10.00th=[11469], 20.00th=[12256], 01:02:44.459 | 30.00th=[12518], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 01:02:44.459 | 70.00th=[14091], 80.00th=[15795], 90.00th=[27395], 95.00th=[31065], 01:02:44.459 | 99.00th=[43254], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 01:02:44.459 | 99.99th=[43779] 01:02:44.459 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 01:02:44.459 slat (usec): min=22, max=11189, avg=120.88, stdev=572.90 01:02:44.459 clat (usec): min=10001, max=45399, avg=16329.21, stdev=7478.66 01:02:44.459 lat 
(usec): min=10059, max=45429, avg=16450.09, stdev=7516.95 01:02:44.459 clat percentiles (usec): 01:02:44.459 | 1.00th=[10552], 5.00th=[10814], 10.00th=[11076], 20.00th=[11469], 01:02:44.459 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12780], 60.00th=[13566], 01:02:44.459 | 70.00th=[16712], 80.00th=[19792], 90.00th=[27919], 95.00th=[32900], 01:02:44.459 | 99.00th=[42206], 99.50th=[45351], 99.90th=[45351], 99.95th=[45351], 01:02:44.459 | 99.99th=[45351] 01:02:44.459 bw ( KiB/s): min=12312, max=20480, per=25.34%, avg=16396.00, stdev=5775.65, samples=2 01:02:44.459 iops : min= 3078, max= 5120, avg=4099.00, stdev=1443.91, samples=2 01:02:44.459 lat (msec) : 2=0.04%, 4=0.28%, 10=0.88%, 20=80.73%, 50=18.07% 01:02:44.459 cpu : usr=4.49%, sys=15.57%, ctx=533, majf=0, minf=13 01:02:44.459 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 01:02:44.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:44.459 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:02:44.459 issued rwts: total=3777,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 01:02:44.459 latency : target=0, window=0, percentile=100.00%, depth=128 01:02:44.459 01:02:44.459 Run status group 0 (all jobs): 01:02:44.459 READ: bw=57.5MiB/s (60.3MB/s), 10.3MiB/s-22.0MiB/s (10.8MB/s-23.0MB/s), io=57.8MiB (60.6MB), run=1001-1005msec 01:02:44.459 WRITE: bw=63.2MiB/s (66.2MB/s), 11.9MiB/s-23.5MiB/s (12.5MB/s-24.6MB/s), io=63.5MiB (66.6MB), run=1001-1005msec 01:02:44.459 01:02:44.459 Disk stats (read/write): 01:02:44.459 nvme0n1: ios=2499/2560, merge=0/0, ticks=16844/16065, in_queue=32909, util=89.07% 01:02:44.459 nvme0n2: ios=2327/2560, merge=0/0, ticks=15995/16394, in_queue=32389, util=89.12% 01:02:44.459 nvme0n3: ios=5161/5203, merge=0/0, ticks=12745/10411, in_queue=23156, util=90.66% 01:02:44.459 nvme0n4: ios=3102/3538, merge=0/0, ticks=11882/13173, in_queue=25055, util=90.12% 01:02:44.459 10:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 01:02:44.459 10:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=107826 01:02:44.459 10:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 01:02:44.459 10:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 01:02:44.459 [global] 01:02:44.459 thread=1 01:02:44.459 invalidate=1 01:02:44.459 rw=read 01:02:44.459 time_based=1 01:02:44.459 runtime=10 01:02:44.459 ioengine=libaio 01:02:44.459 direct=1 01:02:44.459 bs=4096 01:02:44.459 iodepth=1 01:02:44.459 norandommap=1 01:02:44.459 numjobs=1 01:02:44.459 01:02:44.459 [job0] 01:02:44.459 filename=/dev/nvme0n1 01:02:44.459 [job1] 01:02:44.459 filename=/dev/nvme0n2 01:02:44.459 [job2] 01:02:44.459 filename=/dev/nvme0n3 01:02:44.459 [job3] 01:02:44.459 filename=/dev/nvme0n4 01:02:44.459 Could not set queue depth (nvme0n1) 01:02:44.459 Could not set queue depth (nvme0n2) 01:02:44.459 Could not set queue depth (nvme0n3) 01:02:44.459 Could not set queue depth (nvme0n4) 01:02:44.459 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:02:44.459 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:02:44.459 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:02:44.459 job3: (g=0): rw=read, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:02:44.459 fio-3.35 01:02:44.459 Starting 4 threads 01:02:47.766 10:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 01:02:47.766 fio: pid=107869, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 01:02:47.766 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=33546240, buflen=4096 01:02:47.766 10:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 01:02:47.766 fio: pid=107868, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 01:02:47.766 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=36257792, buflen=4096 01:02:47.766 10:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:02:47.766 10:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 01:02:48.025 fio: pid=107866, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 01:02:48.025 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=53985280, buflen=4096 01:02:48.025 10:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:02:48.025 10:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 01:02:48.025 fio: pid=107867, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 01:02:48.025 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=49057792, buflen=4096 01:02:48.284 01:02:48.284 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=107866: Mon Dec 9 10:13:55 2024 01:02:48.284 read: IOPS=4066, BW=15.9MiB/s (16.7MB/s)(51.5MiB/3241msec) 01:02:48.284 slat (usec): min=5, max=13839, avg=12.29, stdev=196.65 01:02:48.284 clat (usec): min=112, max=160513, avg=232.80, stdev=1397.62 01:02:48.284 lat (usec): min=123, max=160520, avg=245.09, stdev=1411.58 01:02:48.284 clat percentiles (usec): 01:02:48.284 | 1.00th=[ 137], 5.00th=[ 159], 10.00th=[ 167], 20.00th=[ 178], 01:02:48.284 | 30.00th=[ 186], 40.00th=[ 196], 50.00th=[ 208], 60.00th=[ 225], 01:02:48.284 | 70.00th=[ 239], 80.00th=[ 255], 90.00th=[ 289], 95.00th=[ 318], 01:02:48.284 | 99.00th=[ 408], 99.50th=[ 445], 99.90th=[ 562], 99.95th=[ 627], 01:02:48.284 | 99.99th=[ 2474] 01:02:48.284 bw ( KiB/s): min=10407, max=18816, per=33.72%, avg=16410.00, stdev=3638.36, samples=6 01:02:48.284 iops : min= 2601, max= 4704, avg=4102.17, stdev=909.69, samples=6 01:02:48.284 lat (usec) : 250=77.60%, 500=22.15%, 750=0.20%, 1000=0.01% 01:02:48.284 lat (msec) : 2=0.01%, 4=0.02%, 250=0.01% 01:02:48.284 cpu : usr=0.46%, sys=2.93%, ctx=13210, majf=0, minf=1 01:02:48.284 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:02:48.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:48.284 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:48.284 issued rwts: total=13181,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 01:02:48.284 latency : target=0, window=0, percentile=100.00%, depth=1 01:02:48.284 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=107867: Mon Dec 9 10:13:55 2024 01:02:48.284 read: IOPS=3452, BW=13.5MiB/s (14.1MB/s)(46.8MiB/3469msec) 01:02:48.284 slat (usec): min=4, max=10786, avg=16.81, stdev=175.31 01:02:48.284 clat (usec): min=104, max=39420, avg=271.95, stdev=366.97 01:02:48.284 lat (usec): min=114, max=39432, avg=288.76, stdev=407.11 01:02:48.284 clat percentiles (usec): 01:02:48.284 | 1.00th=[ 117], 5.00th=[ 125], 10.00th=[ 139], 20.00th=[ 227], 01:02:48.284 | 30.00th=[ 262], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 297], 01:02:48.284 | 70.00th=[ 306], 80.00th=[ 314], 90.00th=[ 326], 95.00th=[ 334], 01:02:48.284 | 99.00th=[ 359], 99.50th=[ 375], 99.90th=[ 668], 99.95th=[ 2245], 01:02:48.284 | 99.99th=[ 3326] 01:02:48.284 bw ( KiB/s): min=12536, max=14503, per=26.60%, avg=12941.83, stdev=777.39, samples=6 01:02:48.284 iops : min= 3134, max= 3625, avg=3235.17, stdev=194.14, samples=6 01:02:48.284 lat (usec) : 250=26.60%, 500=73.16%, 750=0.15%, 1000=0.03% 01:02:48.284 lat (msec) : 2=0.01%, 4=0.04%, 50=0.01% 01:02:48.284 cpu : usr=0.35%, sys=3.17%, ctx=12012, majf=0, minf=1 01:02:48.284 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:02:48.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:48.284 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:48.284 issued rwts: total=11978,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:02:48.284 latency : target=0, window=0, percentile=100.00%, depth=1 01:02:48.284 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=107868: Mon Dec 9 10:13:55 2024 01:02:48.284 read: IOPS=2900, BW=11.3MiB/s (11.9MB/s)(34.6MiB/3052msec) 01:02:48.284 slat (usec): min=4, max=9816, avg=13.77, stdev=143.81 01:02:48.284 clat (usec): min=145, max=5102, avg=329.51, stdev=109.77 01:02:48.284 lat (usec): min=153, max=10186, avg=343.28, stdev=182.03 01:02:48.284 clat percentiles (usec): 01:02:48.284 | 1.00th=[ 174], 5.00th=[ 204], 10.00th=[ 249], 20.00th=[ 269], 01:02:48.284 | 30.00th=[ 281], 40.00th=[ 297], 50.00th=[ 322], 60.00th=[ 338], 01:02:48.284 | 70.00th=[ 355], 80.00th=[ 375], 90.00th=[ 441], 95.00th=[ 486], 01:02:48.284 | 99.00th=[ 586], 99.50th=[ 652], 99.90th=[ 734], 99.95th=[ 1827], 01:02:48.284 | 99.99th=[ 5080] 01:02:48.284 bw ( KiB/s): min=11968, max=12896, per=25.36%, avg=12339.00, stdev=343.86, samples=5 01:02:48.284 iops : min= 2992, max= 3224, avg=3084.60, stdev=86.01, samples=5 01:02:48.284 lat (usec) : 250=10.47%, 500=85.94%, 750=3.50%, 1000=0.01% 01:02:48.284 lat (msec) : 2=0.02%, 4=0.03%, 10=0.01% 01:02:48.284 cpu : usr=0.56%, sys=3.08%, ctx=8857, majf=0, minf=2 01:02:48.284 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:02:48.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:48.284 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:48.284 issued rwts: total=8853,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:02:48.284 latency : target=0, window=0, percentile=100.00%, depth=1 01:02:48.284 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=107869: Mon Dec 9 10:13:55 2024 01:02:48.284 read: IOPS=2884, BW=11.3MiB/s (11.8MB/s)(32.0MiB/2840msec) 01:02:48.284 slat (nsec): min=4480, max=88609, avg=10580.13, 
stdev=6215.65 01:02:48.284 clat (usec): min=147, max=7717, avg=334.68, stdev=123.15 01:02:48.284 lat (usec): min=154, max=7740, avg=345.26, stdev=124.11 01:02:48.284 clat percentiles (usec): 01:02:48.284 | 1.00th=[ 172], 5.00th=[ 212], 10.00th=[ 253], 20.00th=[ 273], 01:02:48.284 | 30.00th=[ 285], 40.00th=[ 306], 50.00th=[ 326], 60.00th=[ 343], 01:02:48.284 | 70.00th=[ 355], 80.00th=[ 383], 90.00th=[ 453], 95.00th=[ 490], 01:02:48.284 | 99.00th=[ 586], 99.50th=[ 652], 99.90th=[ 717], 99.95th=[ 1762], 01:02:48.284 | 99.99th=[ 7701] 01:02:48.285 bw ( KiB/s): min=11280, max=12352, per=24.69%, avg=12014.20, stdev=433.06, samples=5 01:02:48.285 iops : min= 2820, max= 3088, avg=3003.40, stdev=108.16, samples=5 01:02:48.285 lat (usec) : 250=8.95%, 500=86.95%, 750=4.02%, 1000=0.01% 01:02:48.285 lat (msec) : 2=0.04%, 4=0.01%, 10=0.01% 01:02:48.285 cpu : usr=0.63%, sys=2.75%, ctx=8193, majf=0, minf=2 01:02:48.285 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:02:48.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:48.285 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:02:48.285 issued rwts: total=8191,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:02:48.285 latency : target=0, window=0, percentile=100.00%, depth=1 01:02:48.285 01:02:48.285 Run status group 0 (all jobs): 01:02:48.285 READ: bw=47.5MiB/s (49.8MB/s), 11.3MiB/s-15.9MiB/s (11.8MB/s-16.7MB/s), io=165MiB (173MB), run=2840-3469msec 01:02:48.285 01:02:48.285 Disk stats (read/write): 01:02:48.285 nvme0n1: ios=12650/0, merge=0/0, ticks=3002/0, in_queue=3002, util=95.31% 01:02:48.285 nvme0n2: ios=11379/0, merge=0/0, ticks=3228/0, in_queue=3228, util=95.90% 01:02:48.285 nvme0n3: ios=8482/0, merge=0/0, ticks=2752/0, in_queue=2752, util=96.21% 01:02:48.285 nvme0n4: ios=7796/0, merge=0/0, ticks=2509/0, in_queue=2509, util=96.26% 01:02:48.285 10:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:02:48.285 10:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 01:02:48.545 10:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:02:48.545 10:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 01:02:48.803 10:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:02:48.803 10:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 01:02:48.803 10:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:02:48.803 10:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 01:02:49.062 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:02:49.062 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 01:02:49.322 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 01:02:49.322 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 107826 01:02:49.322 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 01:02:49.322 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:02:49.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:02:49.322 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:02:49.322 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 01:02:49.322 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 01:02:49.322 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 01:02:49.322 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 01:02:49.322 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 01:02:49.322 nvmf hotplug test: fio failed as expected 01:02:49.322 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 01:02:49.322 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 01:02:49.322 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 01:02:49.322 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:02:49.582 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 01:02:49.582 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 01:02:49.582 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 01:02:49.582 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 01:02:49.582 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 01:02:49.582 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 01:02:49.582 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 01:02:49.582 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:02:49.582 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 01:02:49.582 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 01:02:49.582 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 01:02:49.582 rmmod nvme_tcp 01:02:49.842 rmmod nvme_fabrics 01:02:49.842 rmmod nvme_keyring 01:02:49.842 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:02:49.842 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 01:02:49.842 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 01:02:49.842 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 107345 ']' 01:02:49.842 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 107345 01:02:49.842 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 107345 ']' 01:02:49.842 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 107345 01:02:49.842 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 01:02:49.842 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:02:49.842 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107345 01:02:49.842 killing process with pid 107345 01:02:49.842 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:02:49.842 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:02:49.842 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107345' 01:02:49.842 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 107345 01:02:49.842 10:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 107345 01:02:50.102 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:02:50.102 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:02:50.102 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:02:50.102 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 01:02:50.102 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 01:02:50.102 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:02:50.102 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 01:02:50.102 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:02:50.102 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:02:50.102 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:02:50.102 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 
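(Annotation: the nvmftestfini trace here, continuing below, unwinds the virtual-ethernet topology the test set up. Assuming the same interface and namespace names shown in the trace, an equivalent manual teardown looks roughly like the sketch below; the final netns delete is an assumption about what remove_spdk_ns does, not something printed verbatim in this log.)

# Sketch of the teardown sequence, to be run as root.
modprobe -r nvme-tcp nvme-fabrics                 # unload initiator modules (the rmmod lines above)
for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$l" nomaster                       # detach each veth from the bridge
  ip link set "$l" down
done
ip link delete nvmf_br type bridge                # remove the bridge itself
ip link delete nvmf_init_if                       # initiator-side veth pairs
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if    # target side lives in the netns
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk                  # assumed: drop the now-empty namespace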
01:02:50.102 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:02:50.102 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:02:50.102 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:02:50.102 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:02:50.102 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:02:50.102 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:02:50.102 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:02:50.362 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:02:50.362 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:02:50.362 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:02:50.362 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:02:50.362 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 01:02:50.362 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:02:50.362 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:02:50.362 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:02:50.362 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 01:02:50.362 01:02:50.362 real 0m19.070s 01:02:50.362 user 0m58.780s 01:02:50.362 sys 0m8.567s 01:02:50.362 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 01:02:50.362 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:02:50.362 ************************************ 01:02:50.363 END TEST nvmf_fio_target 01:02:50.363 ************************************ 01:02:50.363 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 01:02:50.363 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:02:50.363 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:02:50.363 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:02:50.363 ************************************ 01:02:50.363 START TEST nvmf_bdevio 01:02:50.363 ************************************ 01:02:50.363 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 01:02:50.623 * Looking for test storage... 01:02:50.623 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:02:50.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:50.623 --rc genhtml_branch_coverage=1 01:02:50.623 --rc genhtml_function_coverage=1 01:02:50.623 --rc genhtml_legend=1 01:02:50.623 --rc geninfo_all_blocks=1 01:02:50.623 --rc geninfo_unexecuted_blocks=1 01:02:50.623 01:02:50.623 ' 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:02:50.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:50.623 --rc genhtml_branch_coverage=1 01:02:50.623 --rc genhtml_function_coverage=1 01:02:50.623 --rc genhtml_legend=1 01:02:50.623 --rc geninfo_all_blocks=1 01:02:50.623 --rc geninfo_unexecuted_blocks=1 01:02:50.623 01:02:50.623 ' 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:02:50.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:50.623 --rc genhtml_branch_coverage=1 01:02:50.623 --rc genhtml_function_coverage=1 01:02:50.623 --rc genhtml_legend=1 01:02:50.623 --rc geninfo_all_blocks=1 01:02:50.623 --rc geninfo_unexecuted_blocks=1 01:02:50.623 01:02:50.623 ' 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:02:50.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:50.623 --rc genhtml_branch_coverage=1 01:02:50.623 --rc genhtml_function_coverage=1 01:02:50.623 --rc genhtml_legend=1 01:02:50.623 --rc geninfo_all_blocks=1 01:02:50.623 --rc geninfo_unexecuted_blocks=1 01:02:50.623 01:02:50.623 ' 01:02:50.623 10:13:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:02:50.623 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:02:50.624 10:13:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:02:50.624 Cannot find device "nvmf_init_br" 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # true 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:02:50.624 Cannot find device "nvmf_init_br2" 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # true 01:02:50.624 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:02:50.884 Cannot find device "nvmf_tgt_br" 01:02:50.884 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # true 01:02:50.884 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:02:50.884 Cannot find device "nvmf_tgt_br2" 01:02:50.884 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # true 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:02:50.885 Cannot find device "nvmf_init_br" 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # true 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:02:50.885 Cannot find device "nvmf_init_br2" 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # true 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:02:50.885 Cannot find device "nvmf_tgt_br" 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # true 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:02:50.885 Cannot find device "nvmf_tgt_br2" 01:02:50.885 10:13:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # true 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:02:50.885 Cannot find device "nvmf_br" 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # true 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:02:50.885 Cannot find device "nvmf_init_if" 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # true 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:02:50.885 Cannot find device "nvmf_init_if2" 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # true 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:02:50.885 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # true 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:02:50.885 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # true 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:02:50.885 10:13:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:02:50.885 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:02:51.144 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:02:51.144 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:02:51.144 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:02:51.144 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:02:51.145 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:02:51.145 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:02:51.145 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:02:51.145 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:02:51.145 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
01:02:51.145 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 01:02:51.145 01:02:51.145 --- 10.0.0.3 ping statistics --- 01:02:51.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:51.145 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 01:02:51.145 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:02:51.145 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:02:51.145 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 01:02:51.145 01:02:51.145 --- 10.0.0.4 ping statistics --- 01:02:51.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:51.145 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 01:02:51.145 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:02:51.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:02:51.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 01:02:51.145 01:02:51.145 --- 10.0.0.1 ping statistics --- 01:02:51.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:51.145 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 01:02:51.145 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:02:51.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:02:51.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 01:02:51.145 01:02:51.145 --- 10.0.0.2 ping statistics --- 01:02:51.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:51.145 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 01:02:51.145 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:02:51.145 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 01:02:51.145 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:02:51.145 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:02:51.145 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:02:51.145 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:02:51.145 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:02:51.145 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:02:51.145 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:02:51.145 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 01:02:51.145 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:02:51.145 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 01:02:51.145 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:02:51.145 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=108248 01:02:51.145 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 108248 01:02:51.145 
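All four pings passing is what lets nvmf_veth_init return 0: the initiator-side addresses (10.0.0.1/.2) and the target addresses inside the namespace (10.0.0.3/.4) are mutually reachable across the bridge. Boiled down to one veth pair per side, the topology built above looks like this (names and addresses exactly as traced; the second if2/br2 pair and the link-up steps are elided):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br   # bridge the two free peer ends
    ip link set nvmf_tgt_br  master nvmf_br
    # ipts tags every rule with an SPDK_NVMF comment so teardown can strip
    # exactly these rules and nothing else (the later cleanup in this log is
    # iptables-save | grep -v SPDK_NVMF | iptables-restore):
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
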
10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 108248 ']' 01:02:51.145 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:02:51.145 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 01:02:51.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:02:51.145 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:02:51.145 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 01:02:51.145 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:02:51.145 10:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 01:02:51.145 [2024-12-09 10:13:58.031855] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:02:51.145 [2024-12-09 10:13:58.032688] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 01:02:51.145 [2024-12-09 10:13:58.032734] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:02:51.145 [2024-12-09 10:13:58.170573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:02:51.404 [2024-12-09 10:13:58.214770] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:02:51.404 [2024-12-09 10:13:58.214814] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:02:51.404 [2024-12-09 10:13:58.214821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:02:51.404 [2024-12-09 10:13:58.214825] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:02:51.404 [2024-12-09 10:13:58.214829] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:02:51.404 [2024-12-09 10:13:58.215697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 01:02:51.404 [2024-12-09 10:13:58.215899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 01:02:51.404 [2024-12-09 10:13:58.216011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 01:02:51.404 [2024-12-09 10:13:58.216013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:02:51.404 [2024-12-09 10:13:58.286523] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:02:51.404 [2024-12-09 10:13:58.286746] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 01:02:51.404 [2024-12-09 10:13:58.287490] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
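The thread.c notices are the point of this whole suite: under --interrupt-mode the app thread and the nvmf_tgt poll-group threads come up in interrupt mode (the remaining poll groups follow just below), meaning the reactors block on event file descriptors instead of busy-polling. Condensed, the launch-and-wait sequence traced above (the loop is a simplification of waitforlisten, which also retries RPC connects and validates the pid):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
    nvmfpid=$!
    # Stand-in for waitforlisten: block until the RPC socket appears.
    while [[ ! -S /var/tmp/spdk.sock ]]; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.1
    done
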
01:02:51.404 [2024-12-09 10:13:58.287932] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 01:02:51.404 [2024-12-09 10:13:58.288054] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 01:02:51.971 10:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:02:51.971 10:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 01:02:51.971 10:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:02:51.971 10:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 01:02:51.971 10:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:02:51.971 10:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:02:51.971 10:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:02:51.971 10:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:51.971 10:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:02:51.971 [2024-12-09 10:13:58.973010] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:02:51.971 10:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:51.971 10:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:02:51.971 10:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:51.971 10:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:02:52.230 Malloc0 01:02:52.230 10:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:52.230 10:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:02:52.230 10:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:52.230 10:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:02:52.230 10:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:52.230 10:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:02:52.230 10:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:52.230 10:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:02:52.230 10:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:52.230 10:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:02:52.230 10:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:52.230 10:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:02:52.230 [2024-12-09 10:13:59.040884] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:02:52.230 10:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:52.230 10:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 01:02:52.230 10:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 01:02:52.230 10:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 01:02:52.230 10:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 01:02:52.230 10:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:02:52.230 10:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:02:52.230 { 01:02:52.230 "params": { 01:02:52.230 "name": "Nvme$subsystem", 01:02:52.230 "trtype": "$TEST_TRANSPORT", 01:02:52.230 "traddr": "$NVMF_FIRST_TARGET_IP", 01:02:52.230 "adrfam": "ipv4", 01:02:52.230 "trsvcid": "$NVMF_PORT", 01:02:52.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:02:52.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:02:52.230 "hdgst": ${hdgst:-false}, 01:02:52.230 "ddgst": ${ddgst:-false} 01:02:52.230 }, 01:02:52.230 "method": "bdev_nvme_attach_controller" 01:02:52.230 } 01:02:52.230 EOF 01:02:52.230 )") 01:02:52.230 10:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 01:02:52.230 10:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 01:02:52.230 10:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 01:02:52.230 10:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:02:52.230 "params": { 01:02:52.230 "name": "Nvme1", 01:02:52.230 "trtype": "tcp", 01:02:52.230 "traddr": "10.0.0.3", 01:02:52.230 "adrfam": "ipv4", 01:02:52.230 "trsvcid": "4420", 01:02:52.230 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:02:52.230 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:02:52.230 "hdgst": false, 01:02:52.230 "ddgst": false 01:02:52.230 }, 01:02:52.230 "method": "bdev_nvme_attach_controller" 01:02:52.230 }' 01:02:52.230 [2024-12-09 10:13:59.099468] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
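That printf is the tail of gen_nvmf_target_json: the heredoc fragment above, with $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT substituted, is what bdevio reads from /dev/fd/62 and turns into the Nvme1n1 bdev under test. An equivalent standalone invocation with the config written out (the outer subsystems wrapper is not visible in this excerpt, so its exact shape here is an assumption):

    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/stdin <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.3",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
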
01:02:52.230 [2024-12-09 10:13:59.099883] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108299 ] 01:02:52.230 [2024-12-09 10:13:59.247307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:02:52.489 [2024-12-09 10:13:59.320335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:02:52.489 [2024-12-09 10:13:59.320430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:02:52.489 [2024-12-09 10:13:59.320431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:02:52.747 I/O targets: 01:02:52.747 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 01:02:52.747 01:02:52.747 01:02:52.747 CUnit - A unit testing framework for C - Version 2.1-3 01:02:52.747 http://cunit.sourceforge.net/ 01:02:52.747 01:02:52.747 01:02:52.747 Suite: bdevio tests on: Nvme1n1 01:02:52.747 Test: blockdev write read block ...passed 01:02:52.747 Test: blockdev write zeroes read block ...passed 01:02:52.747 Test: blockdev write zeroes read no split ...passed 01:02:52.747 Test: blockdev write zeroes read split ...passed 01:02:52.747 Test: blockdev write zeroes read split partial ...passed 01:02:52.747 Test: blockdev reset ...[2024-12-09 10:13:59.643024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 01:02:52.747 [2024-12-09 10:13:59.643124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1689f50 (9): Bad file descriptor 01:02:52.747 passed 01:02:52.747 Test: blockdev write read 8 blocks ...[2024-12-09 10:13:59.645827] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
01:02:52.747 passed 01:02:52.747 Test: blockdev write read size > 128k ...passed 01:02:52.747 Test: blockdev write read invalid size ...passed 01:02:52.747 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:02:52.747 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:02:52.747 Test: blockdev write read max offset ...passed 01:02:52.747 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:02:52.747 Test: blockdev writev readv 8 blocks ...passed 01:02:52.747 Test: blockdev writev readv 30 x 1block ...passed 01:02:53.006 Test: blockdev writev readv block ...passed 01:02:53.006 Test: blockdev writev readv size > 128k ...passed 01:02:53.006 Test: blockdev writev readv size > 128k in two iovs ...passed 01:02:53.006 Test: blockdev comparev and writev ...[2024-12-09 10:13:59.818441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:02:53.006 [2024-12-09 10:13:59.818492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:02:53.006 [2024-12-09 10:13:59.818505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:02:53.006 [2024-12-09 10:13:59.818529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:02:53.006 [2024-12-09 10:13:59.818822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:02:53.006 [2024-12-09 10:13:59.818837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:02:53.006 [2024-12-09 10:13:59.818848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:02:53.006 [2024-12-09 10:13:59.818854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:02:53.006 [2024-12-09 10:13:59.819138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:02:53.006 [2024-12-09 10:13:59.819168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:02:53.006 [2024-12-09 10:13:59.819179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:02:53.006 [2024-12-09 10:13:59.819185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:02:53.006 [2024-12-09 10:13:59.819454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:02:53.006 [2024-12-09 10:13:59.819469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:02:53.006 [2024-12-09 10:13:59.819487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:02:53.006 [2024-12-09 10:13:59.819494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
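The comparev-and-writev pass drives fused COMPARE+WRITE pairs, and every pair above completes the way a deliberate miscompare should: the COMPARE half fails with a media-class miscompare status, and the fused WRITE half is aborted rather than executed. The (SCT/SC) hex pairs nvme_qpair.c prints decode as follows (mapping taken from the lines above plus the NVMe spec names for the two codes; the helper is hypothetical):

    # Decode the "(SCT/SC)" status pair from the qpair trace lines.
    decode_nvme_status() {
        case "$1/$2" in
            00/00) echo "SUCCESS" ;;
            02/85) echo "COMPARE FAILURE - miscompare (SCT 2: media errors)" ;;
            00/09) echo "ABORTED - FAILED FUSED (write half of a failed fused pair)" ;;
            *)     echo "unrecognized (sct=$1 sc=$2)" ;;
        esac
    }

    decode_nvme_status 02 85   # -> COMPARE FAILURE - miscompare ...
    decode_nvme_status 00 09   # -> ABORTED - FAILED FUSED ...
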
01:02:53.006 passed 01:02:53.006 Test: blockdev nvme passthru rw ...passed 01:02:53.006 Test: blockdev nvme passthru vendor specific ...[2024-12-09 10:13:59.901835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:02:53.006 [2024-12-09 10:13:59.901858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:02:53.006 [2024-12-09 10:13:59.901964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:02:53.006 [2024-12-09 10:13:59.901972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:02:53.006 [2024-12-09 10:13:59.902073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:02:53.006 [2024-12-09 10:13:59.902087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:02:53.006 [2024-12-09 10:13:59.902171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:02:53.006 [2024-12-09 10:13:59.902181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:02:53.006 passed 01:02:53.006 Test: blockdev nvme admin passthru ...passed 01:02:53.006 Test: blockdev copy ...passed 01:02:53.006 01:02:53.006 Run Summary: Type Total Ran Passed Failed Inactive 01:02:53.006 suites 1 1 n/a 0 0 01:02:53.006 tests 23 23 23 0 0 01:02:53.006 asserts 152 152 152 0 n/a 01:02:53.006 01:02:53.006 Elapsed time = 0.854 seconds 01:02:53.265 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:02:53.265 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:53.265 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:02:53.265 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:53.265 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 01:02:53.265 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 01:02:53.265 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 01:02:53.265 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 01:02:53.265 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:02:53.265 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 01:02:53.265 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 01:02:53.265 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:02:53.265 rmmod nvme_tcp 01:02:53.265 rmmod nvme_fabrics 01:02:53.265 rmmod nvme_keyring 01:02:53.265 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:02:53.265 10:14:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 01:02:53.265 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 01:02:53.265 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 108248 ']' 01:02:53.265 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 108248 01:02:53.265 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 108248 ']' 01:02:53.524 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 108248 01:02:53.524 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 01:02:53.524 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:02:53.524 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108248 01:02:53.524 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 01:02:53.524 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 01:02:53.524 killing process with pid 108248 01:02:53.524 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108248' 01:02:53.524 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 108248 01:02:53.524 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 108248 01:02:53.524 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:02:53.524 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:02:53.524 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:02:53.524 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 01:02:53.524 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 01:02:53.524 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:02:53.524 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 01:02:53.524 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:02:53.524 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:02:53.524 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:02:53.783 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:02:53.783 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:02:53.783 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:02:53.783 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:02:53.783 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:02:53.783 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:02:53.783 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:02:53.783 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:02:53.783 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:02:53.783 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:02:53.783 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:02:53.783 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:02:53.783 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 01:02:53.783 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:02:53.783 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:02:53.783 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:02:53.783 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 01:02:53.783 01:02:53.783 real 0m3.440s 01:02:53.783 user 0m7.746s 01:02:53.783 sys 0m1.311s 01:02:53.783 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 01:02:53.783 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:02:53.783 ************************************ 01:02:53.783 END TEST nvmf_bdevio 01:02:53.783 ************************************ 01:02:54.041 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 01:02:54.041 01:02:54.041 real 3m32.578s 01:02:54.041 user 9m20.932s 01:02:54.041 sys 1m14.700s 01:02:54.041 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 01:02:54.041 10:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:02:54.041 ************************************ 01:02:54.041 END TEST nvmf_target_core_interrupt_mode 01:02:54.041 ************************************ 01:02:54.041 10:14:00 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 01:02:54.041 10:14:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:02:54.041 10:14:00 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 01:02:54.041 10:14:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:02:54.041 ************************************ 01:02:54.041 START TEST nvmf_interrupt 01:02:54.041 ************************************ 01:02:54.041 10:14:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 01:02:54.041 * Looking for test storage... 01:02:54.041 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:02:54.041 10:14:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:02:54.041 10:14:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:02:54.041 10:14:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:02:54.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:54.301 --rc genhtml_branch_coverage=1 01:02:54.301 --rc genhtml_function_coverage=1 01:02:54.301 --rc genhtml_legend=1 01:02:54.301 --rc geninfo_all_blocks=1 01:02:54.301 --rc geninfo_unexecuted_blocks=1 01:02:54.301 01:02:54.301 ' 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:02:54.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:54.301 --rc genhtml_branch_coverage=1 01:02:54.301 --rc genhtml_function_coverage=1 01:02:54.301 --rc genhtml_legend=1 01:02:54.301 --rc geninfo_all_blocks=1 01:02:54.301 --rc geninfo_unexecuted_blocks=1 01:02:54.301 01:02:54.301 ' 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:02:54.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:54.301 --rc genhtml_branch_coverage=1 01:02:54.301 --rc genhtml_function_coverage=1 01:02:54.301 --rc genhtml_legend=1 01:02:54.301 --rc geninfo_all_blocks=1 01:02:54.301 --rc geninfo_unexecuted_blocks=1 01:02:54.301 01:02:54.301 ' 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:02:54.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:54.301 --rc genhtml_branch_coverage=1 01:02:54.301 --rc genhtml_function_coverage=1 01:02:54.301 --rc genhtml_legend=1 01:02:54.301 --rc geninfo_all_blocks=1 01:02:54.301 --rc geninfo_unexecuted_blocks=1 01:02:54.301 01:02:54.301 ' 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 01:02:54.301 10:14:01 
nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:02:54.301 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@460 -- # nvmf_veth_init 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 
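One pattern worth calling out in the build_nvmf_app_args trace above: the target's command line is accumulated in a bash array, so it can later be expanded as "${NVMF_APP[@]}" with every argument kept intact, and the '[' 1 -eq 1 ']' branch is what appends --interrupt-mode for this suite. A reduced sketch of the pattern; the binary path and the flag variable are illustrative stand-ins, not the exact common.sh names:

NVMF_APP=(./build/bin/nvmf_tgt)                 # illustrative path
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)     # shm id + tracepoint group mask
if [ "${TEST_INTERRUPT_MODE:-0}" -eq 1 ]; then  # hypothetical flag name
    NVMF_APP+=(--interrupt-mode)
fi
# later wrapped so the target runs inside the test namespace, as done below:
NVMF_APP=(ip netns exec nvmf_tgt_ns_spdk "${NVMF_APP[@]}")
"${NVMF_APP[@]}" &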
01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:02:54.302 Cannot find device "nvmf_init_br" 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # true 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:02:54.302 Cannot find device "nvmf_init_br2" 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # true 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:02:54.302 Cannot find device "nvmf_tgt_br" 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # true 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:02:54.302 Cannot find device "nvmf_tgt_br2" 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # true 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:02:54.302 Cannot find device "nvmf_init_br" 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # true 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:02:54.302 Cannot find device "nvmf_init_br2" 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # true 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:02:54.302 Cannot find device "nvmf_tgt_br" 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # true 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:02:54.302 Cannot find device "nvmf_tgt_br2" 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # true 01:02:54.302 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:02:54.560 Cannot find device "nvmf_br" 01:02:54.560 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # true 01:02:54.560 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # ip 
link delete nvmf_init_if 01:02:54.560 Cannot find device "nvmf_init_if" 01:02:54.560 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # true 01:02:54.560 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:02:54.560 Cannot find device "nvmf_init_if2" 01:02:54.560 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # true 01:02:54.561 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:02:54.561 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:02:54.561 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # true 01:02:54.561 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:02:54.561 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:02:54.561 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # true 01:02:54.561 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:02:54.561 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:02:54.561 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:02:54.561 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:02:54.561 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:02:54.561 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:02:54.561 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:02:54.561 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:02:54.561 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:02:54.561 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:02:54.561 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:02:54.561 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:02:54.561 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:02:54.561 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:02:54.561 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:02:54.561 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:02:54.561 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:02:54.561 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:02:54.561 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:02:54.561 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:02:54.561 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:02:54.561 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
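The run of ip commands above (plus the bridge enslaving and pings that follow) builds the whole virtual test network: four veth pairs, the *_if ends used as addressable endpoints, their *_br peers gathered under one bridge, and the target-side interfaces moved into the nvmf_tgt_ns_spdk namespace. The earlier "Cannot find device" lines are expected; each cleanup command is followed by a '# true' guard so a clean host does not abort the script. Cut down to a single initiator/target pair, the topology is roughly as follows (names follow the log; the second pair and the error guards are omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # endpoint lives in the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge                            # one bridge joins the *_br peers
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

With that in place, 10.0.0.1 on the host and 10.0.0.3 in the namespace sit on the same L2 segment, which the ping checks below then verify in both directions.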
01:02:54.561 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:02:54.561 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:02:54.561 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:02:54.561 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:02:54.818 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:02:54.818 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:02:54.818 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:02:54.818 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:02:54.818 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:02:54.818 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:02:54.818 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:02:54.818 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:02:54.818 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.110 ms 01:02:54.818 01:02:54.818 --- 10.0.0.3 ping statistics --- 01:02:54.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:54.818 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 01:02:54.818 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:02:54.818 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:02:54.818 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 01:02:54.818 01:02:54.818 --- 10.0.0.4 ping statistics --- 01:02:54.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:54.818 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 01:02:54.818 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:02:54.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:02:54.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 01:02:54.818 01:02:54.818 --- 10.0.0.1 ping statistics --- 01:02:54.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:54.818 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 01:02:54.818 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:02:54.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:02:54.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 01:02:54.818 01:02:54.818 --- 10.0.0.2 ping statistics --- 01:02:54.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:54.818 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 01:02:54.818 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:02:54.818 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@461 -- # return 0 01:02:54.819 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:02:54.819 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:02:54.819 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:02:54.819 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:02:54.819 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:02:54.819 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:02:54.819 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:02:54.819 10:14:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 01:02:54.819 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:02:54.819 10:14:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 01:02:54.819 10:14:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:02:54.819 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=108557 01:02:54.819 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 01:02:54.819 10:14:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 108557 01:02:54.819 10:14:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 108557 ']' 01:02:54.819 10:14:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:02:54.819 10:14:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 01:02:54.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:02:54.819 10:14:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:02:54.819 10:14:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 01:02:54.819 10:14:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:02:54.819 [2024-12-09 10:14:01.748169] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:02:54.819 [2024-12-09 10:14:01.748986] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 01:02:54.819 [2024-12-09 10:14:01.749038] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:02:55.078 [2024-12-09 10:14:01.896385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:02:55.078 [2024-12-09 10:14:01.957918] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
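Two details in the setup above are easy to miss. The four pings prove connectivity in both directions before the target starts, and every firewall rule goes in through the ipts wrapper, which stamps the rule with an -m comment tag built from its own arguments; teardown can then drop exactly the rules this run added by filtering iptables-save output (the iptr step near the end of the log). A sketch of the wrapper pair, reconstructed from the expanded commands in the trace (the function names are the log's own, the bodies are an inference):

ipts() {  # insert a rule, tagged with its own argument string
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
iptr() {  # keep every rule except the tagged ones
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}
ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT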
01:02:55.078 [2024-12-09 10:14:01.957969] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:02:55.078 [2024-12-09 10:14:01.957975] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:02:55.078 [2024-12-09 10:14:01.957980] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:02:55.078 [2024-12-09 10:14:01.957985] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:02:55.078 [2024-12-09 10:14:01.959333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:02:55.078 [2024-12-09 10:14:01.959339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:02:55.078 [2024-12-09 10:14:02.084997] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:02:55.078 [2024-12-09 10:14:02.085513] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 01:02:55.078 [2024-12-09 10:14:02.085901] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 01:02:55.668 10:14:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:02:55.668 10:14:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 01:02:55.668 10:14:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:02:55.668 10:14:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 01:02:55.668 10:14:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:02:55.668 10:14:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:02:55.668 10:14:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 01:02:55.668 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 01:02:55.668 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 01:02:55.668 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile bs=2048 count=5000 01:02:55.668 5000+0 records in 01:02:55.668 5000+0 records out 01:02:55.668 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0337045 s, 304 MB/s 01:02:55.668 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile AIO0 2048 01:02:55.668 10:14:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:55.668 10:14:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:02:55.949 AIO0 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:02:55.949 [2024-12-09 10:14:02.740522] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:02:55.949 [2024-12-09 10:14:02.778654] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 108557 0 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 108557 0 idle 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=108557 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108557 -w 256 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108557 root 20 0 64.2g 44928 32640 S 0.0 0.4 0:00.31 reactor_0' 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108557 root 20 0 64.2g 44928 32640 S 0.0 0.4 0:00.31 reactor_0 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 108557 1 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 108557 1 idle 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=108557 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 01:02:55.949 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 01:02:55.950 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 01:02:55.950 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:02:55.950 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108557 -w 256 01:02:55.950 10:14:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 01:02:56.210 10:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108561 root 20 0 64.2g 44928 32640 S 0.0 0.4 0:00.00 reactor_1' 01:02:56.210 10:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:02:56.210 10:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108561 root 20 0 64.2g 44928 32640 S 0.0 0.4 0:00.00 reactor_1 01:02:56.210 10:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:02:56.210 10:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 01:02:56.210 10:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 01:02:56.210 10:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 01:02:56.210 10:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 01:02:56.210 10:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 01:02:56.210 10:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 01:02:56.210 10:14:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 01:02:56.210 10:14:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=108625 01:02:56.210 10:14:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 01:02:56.210 10:14:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 01:02:56.210 10:14:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 01:02:56.210 
10:14:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 108557 0 01:02:56.210 10:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 108557 0 busy 01:02:56.210 10:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=108557 01:02:56.210 10:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 01:02:56.210 10:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 01:02:56.210 10:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 01:02:56.210 10:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 01:02:56.210 10:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 01:02:56.210 10:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 01:02:56.210 10:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 01:02:56.210 10:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:02:56.210 10:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108557 -w 256 01:02:56.210 10:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 01:02:56.470 10:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108557 root 20 0 64.2g 44928 32640 S 0.0 0.4 0:00.31 reactor_0' 01:02:56.470 10:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108557 root 20 0 64.2g 44928 32640 S 0.0 0.4 0:00.31 reactor_0 01:02:56.470 10:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:02:56.470 10:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:02:56.470 10:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 01:02:56.470 10:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 01:02:56.470 10:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 01:02:56.470 10:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 01:02:56.470 10:14:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 01:02:57.410 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 01:02:57.410 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:02:57.410 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108557 -w 256 01:02:57.410 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 01:02:57.670 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108557 root 20 0 64.2g 46208 33024 R 99.9 0.4 0:01.81 reactor_0' 01:02:57.670 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108557 root 20 0 64.2g 46208 33024 R 99.9 0.4 0:01.81 reactor_0 01:02:57.670 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:02:57.670 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:02:57.670 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 01:02:57.670 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 01:02:57.670 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 01:02:57.670 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 01:02:57.670 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = 
\i\d\l\e ]] 01:02:57.670 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 01:02:57.670 10:14:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 01:02:57.670 10:14:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 01:02:57.670 10:14:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 108557 1 01:02:57.670 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 108557 1 busy 01:02:57.670 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=108557 01:02:57.670 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 01:02:57.670 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 01:02:57.670 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 01:02:57.670 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 01:02:57.670 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 01:02:57.670 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 01:02:57.670 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 01:02:57.670 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:02:57.670 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108557 -w 256 01:02:57.670 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 01:02:57.930 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108561 root 20 0 64.2g 46208 33024 R 73.3 0.4 0:00.91 reactor_1' 01:02:57.930 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108561 root 20 0 64.2g 46208 33024 R 73.3 0.4 0:00.91 reactor_1 01:02:57.930 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:02:57.930 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:02:57.930 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=73.3 01:02:57.930 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=73 01:02:57.930 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 01:02:57.930 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 01:02:57.930 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 01:02:57.930 10:14:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 01:02:57.930 10:14:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 108625 01:03:07.918 Initializing NVMe Controllers 01:03:07.918 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:03:07.918 Controller IO queue size 256, less than required. 01:03:07.918 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:03:07.918 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 01:03:07.918 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 01:03:07.918 Initialization complete. Launching workers. 
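While perf launches its workers, note how the busy/idle assertions around it are made: one batch sample of the target's threads via top, a grep for the reactor of interest, the %CPU column truncated to an integer, then a threshold compare; the run pins the initiator on cores 2-3 (-c 0xC) against the target's 0x3 mask, so both reactors should read busy under load and fall back to 0.0 afterwards. A compact sketch of the probe, assuming the same top column layout as in this log (field $9 is %CPU here):

reactor_cpu() {
    local pid=$1 idx=$2 row
    row=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
    row=${row#"${row%%[![:space:]]*}"}  # same job as the sed 's/^\s*//g' above
    awk '{print int($9)}' <<< "$row"    # %CPU truncated, like cpu_rate above
}
(( $(reactor_cpu 108557 0) >= 30 )) && echo busy || echo idle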
01:03:07.918 ========================================================
01:03:07.918 Latency(us)
01:03:07.918 Device Information : IOPS MiB/s Average min max
01:03:07.918 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 7459.50 29.14 34370.59 11805.42 84575.53
01:03:07.918 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 5121.20 20.00 50080.70 9146.97 104367.16
01:03:07.918 ========================================================
01:03:07.918 Total : 12580.70 49.14 40765.67 9146.97 104367.16
01:03:07.918
01:03:07.918 10:14:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 01:03:07.918 10:14:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 108557 0 01:03:07.918 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 108557 0 idle 01:03:07.918 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=108557 01:03:07.918 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 01:03:07.918 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 01:03:07.918 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 01:03:07.918 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 01:03:07.918 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 01:03:07.918 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 01:03:07.918 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 01:03:07.918 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 01:03:07.918 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:03:07.918 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108557 -w 256 01:03:07.918 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108557 root 20 0 64.2g 46208 33024 S 0.0 0.4 0:15.13 reactor_0' 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108557 root 20 0 64.2g 46208 33024 S 0.0 0.4 0:15.13 reactor_0 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 108557 1 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 108557 1 idle 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=108557 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- #
local idx=1 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108557 -w 256 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108561 root 20 0 64.2g 46208 33024 S 0.0 0.4 0:07.53 reactor_1' 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108561 root 20 0 64.2g 46208 33024 S 0.0 0.4 0:07.53 reactor_1 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 01:03:07.919 10:14:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 01:03:08.866 10:14:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:03:08.866 10:14:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 01:03:08.866 10:14:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:03:09.126 10:14:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 01:03:09.126 10:14:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 01:03:09.126 10:14:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 01:03:09.126 10:14:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for 
i in {0..1} 01:03:09.126 10:14:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 108557 0 01:03:09.126 10:14:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 108557 0 idle 01:03:09.126 10:14:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=108557 01:03:09.126 10:14:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 01:03:09.126 10:14:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 01:03:09.126 10:14:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 01:03:09.126 10:14:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 01:03:09.126 10:14:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 01:03:09.126 10:14:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 01:03:09.126 10:14:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 01:03:09.126 10:14:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 01:03:09.126 10:14:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:03:09.126 10:14:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 01:03:09.126 10:14:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108557 -w 256 01:03:09.126 10:14:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108557 root 20 0 64.2g 48512 33024 S 0.0 0.4 0:15.19 reactor_0' 01:03:09.126 10:14:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108557 root 20 0 64.2g 48512 33024 S 0.0 0.4 0:15.19 reactor_0 01:03:09.126 10:14:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:03:09.126 10:14:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:03:09.126 10:14:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 01:03:09.126 10:14:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 01:03:09.126 10:14:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 01:03:09.126 10:14:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 01:03:09.126 10:14:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 01:03:09.126 10:14:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 01:03:09.126 10:14:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 01:03:09.126 10:14:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 108557 1 01:03:09.126 10:14:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 108557 1 idle 01:03:09.126 10:14:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=108557 01:03:09.126 10:14:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 01:03:09.126 10:14:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 01:03:09.126 10:14:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 01:03:09.126 10:14:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 01:03:09.126 10:14:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 01:03:09.126 10:14:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 01:03:09.126 10:14:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 01:03:09.126 10:14:16 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@25 -- # (( j = 10 )) 01:03:09.126 10:14:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:03:09.126 10:14:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108557 -w 256 01:03:09.126 10:14:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 01:03:09.385 10:14:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108561 root 20 0 64.2g 48512 33024 S 0.0 0.4 0:07.55 reactor_1' 01:03:09.385 10:14:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108561 root 20 0 64.2g 48512 33024 S 0.0 0.4 0:07.55 reactor_1 01:03:09.386 10:14:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:03:09.386 10:14:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:03:09.386 10:14:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 01:03:09.386 10:14:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 01:03:09.386 10:14:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 01:03:09.386 10:14:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 01:03:09.386 10:14:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 01:03:09.386 10:14:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 01:03:09.386 10:14:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:03:09.386 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:03:09.386 10:14:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:03:09.386 10:14:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 01:03:09.386 10:14:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 01:03:09.386 10:14:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 01:03:09.386 10:14:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 01:03:09.386 10:14:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 01:03:09.386 10:14:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 01:03:09.386 10:14:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 01:03:09.386 10:14:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 01:03:09.386 10:14:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 01:03:09.386 10:14:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 01:03:09.970 10:14:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:03:09.970 10:14:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 01:03:09.971 10:14:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 01:03:09.971 10:14:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:03:09.971 rmmod nvme_tcp 01:03:09.971 rmmod nvme_fabrics 01:03:09.971 rmmod nvme_keyring 01:03:09.971 10:14:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:03:09.971 10:14:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 01:03:09.971 10:14:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 01:03:09.971 10:14:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 108557 ']' 01:03:09.971 10:14:16 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 108557 01:03:09.971 10:14:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 108557 ']' 01:03:09.971 10:14:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 108557 01:03:09.971 10:14:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 01:03:09.971 10:14:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:03:09.971 10:14:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108557 01:03:09.971 10:14:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:03:09.971 10:14:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:03:09.971 killing process with pid 108557 01:03:09.971 10:14:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108557' 01:03:09.971 10:14:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 108557 01:03:09.971 10:14:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 108557 01:03:10.230 10:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:03:10.230 10:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:03:10.230 10:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:03:10.230 10:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 01:03:10.230 10:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:03:10.230 10:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 01:03:10.230 10:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 01:03:10.230 10:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:03:10.230 10:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:03:10.230 10:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:03:10.230 10:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:03:10.230 10:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:03:10.490 10:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:03:10.490 10:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:03:10.490 10:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:03:10.490 10:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:03:10.490 10:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:03:10.490 10:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:03:10.490 10:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:03:10.490 10:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:03:10.490 10:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:10.490 10:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:03:10.490 10:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@246 -- # remove_spdk_ns 01:03:10.490 10:14:17 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:10.490 10:14:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:03:10.490 10:14:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:10.490 10:14:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@300 -- # return 0 01:03:10.490 01:03:10.490 real 0m16.544s 01:03:10.490 user 0m31.664s 01:03:10.490 sys 0m5.809s 01:03:10.490 10:14:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 01:03:10.490 10:14:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:03:10.490 ************************************ 01:03:10.490 END TEST nvmf_interrupt 01:03:10.490 ************************************ 01:03:10.750 01:03:10.750 real 20m18.674s 01:03:10.750 user 52m50.566s 01:03:10.750 sys 4m50.313s 01:03:10.750 10:14:17 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 01:03:10.750 10:14:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:03:10.750 ************************************ 01:03:10.750 END TEST nvmf_tcp 01:03:10.750 ************************************ 01:03:10.750 10:14:17 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 01:03:10.750 10:14:17 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 01:03:10.750 10:14:17 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:03:10.750 10:14:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:03:10.750 10:14:17 -- common/autotest_common.sh@10 -- # set +x 01:03:10.750 ************************************ 01:03:10.750 START TEST spdkcli_nvmf_tcp 01:03:10.750 ************************************ 01:03:10.750 10:14:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 01:03:10.750 * Looking for test storage... 
01:03:10.750 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 01:03:10.750 10:14:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:03:10.750 10:14:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 01:03:10.750 10:14:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:03:11.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:11.010 --rc genhtml_branch_coverage=1 01:03:11.010 --rc genhtml_function_coverage=1 01:03:11.010 --rc genhtml_legend=1 01:03:11.010 --rc geninfo_all_blocks=1 01:03:11.010 --rc geninfo_unexecuted_blocks=1 01:03:11.010 01:03:11.010 ' 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:03:11.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:11.010 --rc genhtml_branch_coverage=1 
01:03:11.010 --rc genhtml_function_coverage=1 01:03:11.010 --rc genhtml_legend=1 01:03:11.010 --rc geninfo_all_blocks=1 01:03:11.010 --rc geninfo_unexecuted_blocks=1 01:03:11.010 01:03:11.010 ' 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:03:11.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:11.010 --rc genhtml_branch_coverage=1 01:03:11.010 --rc genhtml_function_coverage=1 01:03:11.010 --rc genhtml_legend=1 01:03:11.010 --rc geninfo_all_blocks=1 01:03:11.010 --rc geninfo_unexecuted_blocks=1 01:03:11.010 01:03:11.010 ' 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:03:11.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:11.010 --rc genhtml_branch_coverage=1 01:03:11.010 --rc genhtml_function_coverage=1 01:03:11.010 --rc genhtml_legend=1 01:03:11.010 --rc geninfo_all_blocks=1 01:03:11.010 --rc geninfo_unexecuted_blocks=1 01:03:11.010 01:03:11.010 ' 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:11.010 10:14:17 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:11.011 10:14:17 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:11.011 10:14:17 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 01:03:11.011 10:14:17 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:11.011 10:14:17 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 01:03:11.011 10:14:17 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:03:11.011 10:14:17 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:03:11.011 10:14:17 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:03:11.011 10:14:17 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:03:11.011 10:14:17 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:03:11.011 10:14:17 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:03:11.011 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:03:11.011 10:14:17 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:03:11.011 10:14:17 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:03:11.011 10:14:17 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 01:03:11.011 10:14:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 01:03:11.011 10:14:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 01:03:11.011 10:14:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 01:03:11.011 10:14:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 01:03:11.011 10:14:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 01:03:11.011 10:14:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 01:03:11.011 10:14:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 01:03:11.011 10:14:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=108976 01:03:11.011 10:14:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 108976 01:03:11.011 10:14:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 108976 ']' 01:03:11.011 10:14:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 01:03:11.011 10:14:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:11.011 10:14:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 01:03:11.011 10:14:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:03:11.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:03:11.011 10:14:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 01:03:11.011 10:14:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:03:11.011 [2024-12-09 10:14:17.929681] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 01:03:11.011 [2024-12-09 10:14:17.929774] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108976 ] 01:03:11.270 [2024-12-09 10:14:18.075031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:03:11.270 [2024-12-09 10:14:18.152566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:03:11.270 [2024-12-09 10:14:18.152579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:03:11.839 10:14:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:03:11.839 10:14:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 01:03:11.839 10:14:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 01:03:11.839 10:14:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 01:03:11.839 10:14:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:03:11.839 10:14:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 01:03:11.839 10:14:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 01:03:11.839 10:14:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 01:03:11.839 10:14:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 01:03:11.839 10:14:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:03:11.839 10:14:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 01:03:11.839 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 01:03:11.839 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 01:03:11.839 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 01:03:11.839 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 01:03:11.839 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 01:03:11.839 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 01:03:11.839 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 01:03:11.839 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 01:03:11.839 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 01:03:11.839 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 01:03:11.839 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 01:03:11.839 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 01:03:11.839 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 01:03:11.839 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 01:03:11.839 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 01:03:11.839 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 01:03:11.839 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 01:03:11.839 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 01:03:11.839 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 01:03:11.839 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 01:03:11.839 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 01:03:11.839 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 01:03:11.839 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 01:03:11.839 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 01:03:11.839 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 01:03:11.839 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 01:03:11.839 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 01:03:11.839 ' 01:03:15.134 [2024-12-09 10:14:21.671198] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:03:16.073 [2024-12-09 10:14:23.033848] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 01:03:18.608 [2024-12-09 10:14:25.575398] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 01:03:21.145 [2024-12-09 10:14:27.781379] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 01:03:22.526 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 01:03:22.526 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 01:03:22.526 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 01:03:22.526 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 
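(The "Executing command" lines here replay each [command, expected match, expect-success] triple passed to spdkcli_job.py above; the remaining confirmations continue below. For reference, a minimal hand-driven sketch of the same NVMe-oF configuration via rpc.py, using only RPC names and flags that appear elsewhere in this log, and assuming the nvmf_tgt started above is listening on the default /var/tmp/spdk.sock:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -u 8192                      # TCP transport, io_unit_size=8192
    $rpc bdev_malloc_create -b Malloc1 32 512                      # 32 MiB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -a -s N37SXV509SRW -m 4
    $rpc nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260

This is a sketch of the equivalent configuration, not the test's own code path; the test drives the interactive spdkcli shell instead, so that the match/expect machinery can validate each command's output.)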
01:03:22.526 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 01:03:22.526 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 01:03:22.526 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 01:03:22.526 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 01:03:22.526 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 01:03:22.526 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 01:03:22.526 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 01:03:22.526 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 01:03:22.526 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 01:03:22.526 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 01:03:22.526 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 01:03:22.526 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 01:03:22.526 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 01:03:22.526 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 01:03:22.526 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 01:03:22.526 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 01:03:22.526 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 01:03:22.526 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 01:03:22.526 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 01:03:22.526 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 01:03:22.526 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 01:03:22.526 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 01:03:22.526 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 01:03:22.526 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 01:03:22.526 10:14:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 01:03:22.526 10:14:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 01:03:22.526 10:14:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 01:03:22.786 10:14:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 01:03:22.786 10:14:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 01:03:22.786 10:14:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:03:22.786 10:14:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 01:03:22.786 10:14:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 01:03:23.045 10:14:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 01:03:23.045 10:14:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 01:03:23.045 10:14:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 01:03:23.045 10:14:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 01:03:23.045 10:14:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:03:23.305 10:14:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 01:03:23.305 10:14:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 01:03:23.305 10:14:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:03:23.305 10:14:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 01:03:23.305 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 01:03:23.305 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 01:03:23.305 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 01:03:23.305 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 01:03:23.305 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 01:03:23.305 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 01:03:23.305 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 01:03:23.305 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 01:03:23.305 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 01:03:23.305 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 01:03:23.305 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 01:03:23.305 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 01:03:23.305 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 01:03:23.305 ' 01:03:29.881 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 01:03:29.881 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 01:03:29.881 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 01:03:29.881 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 01:03:29.881 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 01:03:29.881 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 01:03:29.881 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 01:03:29.881 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 01:03:29.881 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 01:03:29.881 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 01:03:29.881 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 01:03:29.881 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 01:03:29.881 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 01:03:29.881 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 01:03:29.881 10:14:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 01:03:29.881 10:14:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 01:03:29.881 10:14:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:03:29.881 10:14:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 108976 01:03:29.881 10:14:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 108976 ']' 01:03:29.881 10:14:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 108976 01:03:29.881 10:14:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 01:03:29.881 10:14:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:03:29.881 10:14:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108976 01:03:29.881 10:14:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:03:29.881 10:14:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:03:29.881 killing process with pid 108976 01:03:29.881 10:14:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108976' 01:03:29.881 10:14:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 108976 01:03:29.881 10:14:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 108976 01:03:29.881 10:14:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 01:03:29.881 10:14:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 01:03:29.881 10:14:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 108976 ']' 01:03:29.881 10:14:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 108976 01:03:29.881 10:14:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 108976 ']' 01:03:29.881 10:14:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 108976 01:03:29.881 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (108976) - No such process 01:03:29.881 Process with pid 108976 is not found 01:03:29.881 10:14:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 108976 is not found' 01:03:29.881 10:14:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 01:03:29.881 10:14:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 01:03:29.881 10:14:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 01:03:29.881 01:03:29.881 real 0m18.633s 01:03:29.881 user 0m41.048s 01:03:29.881 sys 0m1.071s 01:03:29.881 10:14:36 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@1130 -- # xtrace_disable 01:03:29.881 10:14:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:03:29.881 ************************************ 01:03:29.881 END TEST spdkcli_nvmf_tcp 01:03:29.881 ************************************ 01:03:29.881 10:14:36 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 01:03:29.881 10:14:36 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:03:29.881 10:14:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:03:29.881 10:14:36 -- common/autotest_common.sh@10 -- # set +x 01:03:29.881 ************************************ 01:03:29.881 START TEST nvmf_identify_passthru 01:03:29.881 ************************************ 01:03:29.881 10:14:36 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 01:03:29.881 * Looking for test storage... 01:03:29.881 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:03:29.881 10:14:36 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:03:29.881 10:14:36 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 01:03:29.881 10:14:36 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:03:29.881 10:14:36 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:03:29.881 10:14:36 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:03:29.881 10:14:36 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 01:03:29.881 10:14:36 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 01:03:29.881 10:14:36 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 01:03:29.881 10:14:36 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 01:03:29.881 10:14:36 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 01:03:29.881 10:14:36 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 01:03:29.881 10:14:36 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 01:03:29.881 10:14:36 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 01:03:29.881 10:14:36 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 01:03:29.882 10:14:36 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:03:29.882 10:14:36 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 01:03:29.882 10:14:36 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 01:03:29.882 10:14:36 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 01:03:29.882 10:14:36 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:03:29.882 10:14:36 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 01:03:29.882 10:14:36 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 01:03:29.882 10:14:36 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:03:29.882 10:14:36 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 01:03:29.882 10:14:36 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 01:03:29.882 10:14:36 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 01:03:29.882 10:14:36 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 01:03:29.882 10:14:36 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:03:29.882 10:14:36 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 01:03:29.882 10:14:36 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 01:03:29.882 10:14:36 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:03:29.882 10:14:36 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:03:29.882 10:14:36 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 01:03:29.882 10:14:36 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:03:29.882 10:14:36 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:03:29.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:29.882 --rc genhtml_branch_coverage=1 01:03:29.882 --rc genhtml_function_coverage=1 01:03:29.882 --rc genhtml_legend=1 01:03:29.882 --rc geninfo_all_blocks=1 01:03:29.882 --rc geninfo_unexecuted_blocks=1 01:03:29.882 01:03:29.882 ' 01:03:29.882 10:14:36 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:03:29.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:29.882 --rc genhtml_branch_coverage=1 01:03:29.882 --rc genhtml_function_coverage=1 01:03:29.882 --rc genhtml_legend=1 01:03:29.882 --rc geninfo_all_blocks=1 01:03:29.882 --rc geninfo_unexecuted_blocks=1 01:03:29.882 01:03:29.882 ' 01:03:29.882 10:14:36 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:03:29.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:29.882 --rc genhtml_branch_coverage=1 01:03:29.882 --rc genhtml_function_coverage=1 01:03:29.882 --rc genhtml_legend=1 01:03:29.882 --rc geninfo_all_blocks=1 01:03:29.882 --rc geninfo_unexecuted_blocks=1 01:03:29.882 01:03:29.882 ' 01:03:29.882 10:14:36 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:03:29.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:29.882 --rc genhtml_branch_coverage=1 01:03:29.882 --rc genhtml_function_coverage=1 01:03:29.882 --rc genhtml_legend=1 01:03:29.882 --rc geninfo_all_blocks=1 01:03:29.882 --rc geninfo_unexecuted_blocks=1 01:03:29.882 01:03:29.882 ' 01:03:29.882 10:14:36 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:03:29.882 
10:14:36 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:03:29.882 10:14:36 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 01:03:29.882 10:14:36 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:03:29.882 10:14:36 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:03:29.882 10:14:36 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:03:29.882 10:14:36 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:29.882 10:14:36 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:29.882 10:14:36 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:29.882 10:14:36 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 01:03:29.882 10:14:36 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:03:29.882 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 01:03:29.882 10:14:36 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:03:29.882 10:14:36 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 01:03:29.882 10:14:36 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:03:29.882 10:14:36 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:03:29.882 10:14:36 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:03:29.882 10:14:36 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:29.882 10:14:36 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:29.882 10:14:36 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:29.882 10:14:36 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 01:03:29.882 10:14:36 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:29.882 10:14:36 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:29.882 10:14:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:03:29.882 10:14:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:03:29.882 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@460 -- # nvmf_veth_init 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@154 -- # 
NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:03:29.883 Cannot find device "nvmf_init_br" 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:03:29.883 Cannot find device "nvmf_init_br2" 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:03:29.883 Cannot find device "nvmf_tgt_br" 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@164 -- # true 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:03:29.883 Cannot find device "nvmf_tgt_br2" 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@165 -- # true 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:03:29.883 Cannot find device "nvmf_init_br" 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@166 -- # true 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:03:29.883 Cannot find device "nvmf_init_br2" 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@167 -- # true 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:03:29.883 Cannot find device "nvmf_tgt_br" 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@168 -- # true 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:03:29.883 Cannot find device "nvmf_tgt_br2" 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@169 -- # true 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:03:29.883 Cannot find device "nvmf_br" 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@170 -- # true 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:03:29.883 Cannot find device "nvmf_init_if" 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@171 -- # true 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:03:29.883 Cannot find device "nvmf_init_if2" 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@172 -- # true 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:29.883 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@173 -- # true 01:03:29.883 10:14:36 nvmf_identify_passthru -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:03:29.883 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@174 -- # true 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:03:29.883 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:03:30.143 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:03:30.143 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:03:30.143 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 01:03:30.143 01:03:30.143 --- 10.0.0.3 ping statistics --- 01:03:30.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:30.143 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 01:03:30.143 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:03:30.143 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:03:30.143 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.132 ms 01:03:30.143 01:03:30.143 --- 10.0.0.4 ping statistics --- 01:03:30.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:30.143 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 01:03:30.143 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:03:30.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:03:30.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.013 ms 01:03:30.143 01:03:30.143 --- 10.0.0.1 ping statistics --- 01:03:30.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:30.143 rtt min/avg/max/mdev = 0.013/0.013/0.013/0.000 ms 01:03:30.143 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:03:30.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:03:30.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.032 ms 01:03:30.143 01:03:30.143 --- 10.0.0.2 ping statistics --- 01:03:30.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:30.143 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 01:03:30.143 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:03:30.143 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@461 -- # return 0 01:03:30.143 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:03:30.143 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:03:30.143 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:03:30.143 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:03:30.143 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:03:30.143 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:03:30.143 10:14:36 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:03:30.143 10:14:36 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 01:03:30.143 10:14:36 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 01:03:30.143 10:14:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:03:30.143 10:14:36 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 01:03:30.143 10:14:36 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 01:03:30.143 10:14:36 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 01:03:30.143 10:14:36 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 01:03:30.143 10:14:36 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 01:03:30.143 10:14:36 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 01:03:30.143 10:14:36 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 01:03:30.143 10:14:36 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 01:03:30.143 10:14:36 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:03:30.143 10:14:36 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 01:03:30.143 10:14:37 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 01:03:30.143 10:14:37 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 01:03:30.143 10:14:37 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 01:03:30.143 10:14:37 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 01:03:30.143 10:14:37 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 01:03:30.143 10:14:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 01:03:30.143 10:14:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 01:03:30.143 10:14:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 01:03:30.403 10:14:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
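(At this point the reference serial number has been read from the local PCIe controller via grep/awk over spdk_nvme_identify output (nvme_serial_number=12340); the model number is extracted the same way next, and the same identify is later repeated against the TCP subsystem to confirm that the passthru controller reports identical identity data. A condensed sketch of that comparison, built from the exact spdk_nvme_identify invocations that appear in this log; the 10.0.0.3:4420 listener for nqn.2016-06.io.spdk:cnode1 is created further below:

    id=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
    # local PCIe controller vs. NVMe-oF passthru controller over TCP
    local_sn=$($id -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 | grep 'Serial Number:' | awk '{print $3}')
    remote_sn=$($id -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep 'Serial Number:' | awk '{print $3}')
    [ "$local_sn" = "$remote_sn" ] && echo "passthru identify OK: serial $local_sn"

If --passthru-identify-ctrlr is working, both invocations report the same serial, which is what the test asserts.)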
01:03:30.403 10:14:37 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 01:03:30.403 10:14:37 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 01:03:30.403 10:14:37 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 01:03:30.662 10:14:37 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 01:03:30.662 10:14:37 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 01:03:30.662 10:14:37 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 01:03:30.662 10:14:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:03:30.662 10:14:37 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 01:03:30.662 10:14:37 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 01:03:30.662 10:14:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:03:30.662 10:14:37 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=109504 01:03:30.662 10:14:37 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 01:03:30.662 10:14:37 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:03:30.662 10:14:37 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 109504 01:03:30.662 10:14:37 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 109504 ']' 01:03:30.662 10:14:37 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:30.662 10:14:37 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 01:03:30.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:03:30.662 10:14:37 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:03:30.662 10:14:37 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 01:03:30.662 10:14:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:03:30.662 [2024-12-09 10:14:37.580677] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 01:03:30.662 [2024-12-09 10:14:37.580743] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:03:30.922 [2024-12-09 10:14:37.733738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:03:30.922 [2024-12-09 10:14:37.785308] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:03:30.922 [2024-12-09 10:14:37.785359] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:03:30.922 [2024-12-09 10:14:37.785365] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:03:30.922 [2024-12-09 10:14:37.785370] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
01:03:30.922 [2024-12-09 10:14:37.785375] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:03:30.922 [2024-12-09 10:14:37.786230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:03:30.922 [2024-12-09 10:14:37.786361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:03:30.922 [2024-12-09 10:14:37.786364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:03:30.922 [2024-12-09 10:14:37.786322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:03:31.492 10:14:38 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:03:31.492 10:14:38 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 01:03:31.492 10:14:38 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 01:03:31.492 10:14:38 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:31.492 10:14:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:03:31.492 10:14:38 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:31.492 10:14:38 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 01:03:31.492 10:14:38 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:31.492 10:14:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:03:31.492 [2024-12-09 10:14:38.528164] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 01:03:31.833 10:14:38 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:31.833 10:14:38 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:03:31.833 10:14:38 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:31.833 10:14:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:03:31.833 [2024-12-09 10:14:38.537579] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:03:31.833 10:14:38 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:31.833 10:14:38 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 01:03:31.833 10:14:38 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 01:03:31.833 10:14:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:03:31.833 10:14:38 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 01:03:31.833 10:14:38 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:31.833 10:14:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:03:31.833 Nvme0n1 01:03:31.833 10:14:38 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:31.833 10:14:38 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 01:03:31.833 10:14:38 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:31.833 10:14:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:03:31.833 10:14:38 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:31.833 10:14:38 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 01:03:31.833 10:14:38 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:31.833 10:14:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:03:31.833 10:14:38 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:31.833 10:14:38 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:03:31.833 10:14:38 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:31.833 10:14:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:03:31.833 [2024-12-09 10:14:38.699567] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:03:31.833 10:14:38 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:31.833 10:14:38 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 01:03:31.833 10:14:38 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:31.833 10:14:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:03:31.833 [ 01:03:31.833 { 01:03:31.833 "allow_any_host": true, 01:03:31.833 "hosts": [], 01:03:31.833 "listen_addresses": [], 01:03:31.833 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 01:03:31.833 "subtype": "Discovery" 01:03:31.833 }, 01:03:31.833 { 01:03:31.833 "allow_any_host": true, 01:03:31.833 "hosts": [], 01:03:31.833 "listen_addresses": [ 01:03:31.833 { 01:03:31.833 "adrfam": "IPv4", 01:03:31.833 "traddr": "10.0.0.3", 01:03:31.833 "trsvcid": "4420", 01:03:31.833 "trtype": "TCP" 01:03:31.833 } 01:03:31.833 ], 01:03:31.833 "max_cntlid": 65519, 01:03:31.833 "max_namespaces": 1, 01:03:31.833 "min_cntlid": 1, 01:03:31.833 "model_number": "SPDK bdev Controller", 01:03:31.833 "namespaces": [ 01:03:31.833 { 01:03:31.833 "bdev_name": "Nvme0n1", 01:03:31.833 "name": "Nvme0n1", 01:03:31.833 "nguid": "AD4A9B64AC104009BD8AC252A45010DF", 01:03:31.833 "nsid": 1, 01:03:31.833 "uuid": "ad4a9b64-ac10-4009-bd8a-c252a45010df" 01:03:31.833 } 01:03:31.833 ], 01:03:31.833 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:03:31.833 "serial_number": "SPDK00000000000001", 01:03:31.833 "subtype": "NVMe" 01:03:31.833 } 01:03:31.833 ] 01:03:31.833 10:14:38 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:31.833 10:14:38 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 01:03:31.833 10:14:38 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 01:03:31.833 10:14:38 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 01:03:32.105 10:14:38 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 01:03:32.105 10:14:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 01:03:32.105 10:14:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 01:03:32.105 10:14:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 01:03:32.365 10:14:39 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 01:03:32.365 10:14:39 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 01:03:32.365 10:14:39 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 01:03:32.365 10:14:39 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:03:32.365 10:14:39 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:32.365 10:14:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:03:32.365 10:14:39 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:32.365 10:14:39 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 01:03:32.365 10:14:39 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 01:03:32.365 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 01:03:32.365 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 01:03:32.365 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:03:32.365 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 01:03:32.365 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 01:03:32.365 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:03:32.365 rmmod nvme_tcp 01:03:32.365 rmmod nvme_fabrics 01:03:32.365 rmmod nvme_keyring 01:03:32.365 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:03:32.365 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 01:03:32.365 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 01:03:32.365 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 109504 ']' 01:03:32.365 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 109504 01:03:32.366 10:14:39 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 109504 ']' 01:03:32.366 10:14:39 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 109504 01:03:32.366 10:14:39 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 01:03:32.366 10:14:39 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:03:32.366 10:14:39 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109504 01:03:32.366 killing process with pid 109504 01:03:32.366 10:14:39 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:03:32.366 10:14:39 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:03:32.366 10:14:39 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109504' 01:03:32.366 10:14:39 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 109504 01:03:32.366 10:14:39 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 109504 01:03:32.626 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:03:32.626 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:03:32.626 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:03:32.626 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 01:03:32.626 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 01:03:32.626 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 01:03:32.626 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 01:03:32.626 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:03:32.626 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:03:32.626 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:03:32.626 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:03:32.626 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:03:32.626 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:03:32.626 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:03:32.626 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:03:32.626 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:03:32.626 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:03:32.626 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:03:32.886 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:03:32.886 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:03:32.886 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:32.886 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:03:32.886 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@246 -- # remove_spdk_ns 01:03:32.886 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:32.886 10:14:39 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:03:32.886 10:14:39 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:32.886 10:14:39 nvmf_identify_passthru -- nvmf/common.sh@300 -- # return 0 01:03:32.886 01:03:32.886 real 0m3.533s 01:03:32.886 user 0m7.712s 01:03:32.886 sys 0m1.062s 01:03:32.886 10:14:39 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 01:03:32.886 10:14:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:03:32.886 ************************************ 01:03:32.886 END TEST nvmf_identify_passthru 01:03:32.886 ************************************ 01:03:32.886 10:14:39 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 01:03:32.886 10:14:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:03:32.886 10:14:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:03:32.886 10:14:39 -- common/autotest_common.sh@10 -- # set +x 01:03:32.886 ************************************ 01:03:32.886 START TEST nvmf_dif 01:03:32.886 ************************************ 01:03:32.886 10:14:39 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 01:03:33.146 * Looking for test storage... 
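Stripped of the xtrace noise, the passthru test that just finished is a short RPC conversation with a target started under --wait-for-rpc. A sketch of the same sequence using plain scripts/rpc.py against the default /var/tmp/spdk.sock; the trace's rpc_cmd is the test harness's wrapper around exactly these calls, and the addresses, NQNs and expected identity values are the ones from this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Passthru identify must be configured before the framework initializes.
$rpc nvmf_set_config --passthru-identify-ctrlr
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o -u 8192
# Attach the physical controller as bdev Nvme0, then export it over TCP.
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# With passthru enabled, identify over the fabric should echo the PCIe
# device's identity, which is what the @63/@68 comparisons assert (12340/QEMU).
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' |
    grep -E 'Serial Number:|Model Number:'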
01:03:33.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:03:33.146 10:14:40 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:03:33.147 10:14:40 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 01:03:33.147 10:14:40 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:03:33.147 10:14:40 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:03:33.147 10:14:40 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:03:33.147 10:14:40 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 01:03:33.147 10:14:40 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 01:03:33.147 10:14:40 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 01:03:33.147 10:14:40 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 01:03:33.147 10:14:40 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 01:03:33.147 10:14:40 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 01:03:33.147 10:14:40 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 01:03:33.147 10:14:40 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 01:03:33.147 10:14:40 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 01:03:33.147 10:14:40 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:03:33.147 10:14:40 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 01:03:33.147 10:14:40 nvmf_dif -- scripts/common.sh@345 -- # : 1 01:03:33.147 10:14:40 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 01:03:33.147 10:14:40 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:03:33.147 10:14:40 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 01:03:33.147 10:14:40 nvmf_dif -- scripts/common.sh@353 -- # local d=1 01:03:33.147 10:14:40 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:03:33.147 10:14:40 nvmf_dif -- scripts/common.sh@355 -- # echo 1 01:03:33.147 10:14:40 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 01:03:33.147 10:14:40 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 01:03:33.147 10:14:40 nvmf_dif -- scripts/common.sh@353 -- # local d=2 01:03:33.147 10:14:40 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:03:33.147 10:14:40 nvmf_dif -- scripts/common.sh@355 -- # echo 2 01:03:33.147 10:14:40 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 01:03:33.147 10:14:40 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:03:33.147 10:14:40 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:03:33.147 10:14:40 nvmf_dif -- scripts/common.sh@368 -- # return 0 01:03:33.147 10:14:40 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:03:33.147 10:14:40 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:03:33.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:33.147 --rc genhtml_branch_coverage=1 01:03:33.147 --rc genhtml_function_coverage=1 01:03:33.147 --rc genhtml_legend=1 01:03:33.147 --rc geninfo_all_blocks=1 01:03:33.147 --rc geninfo_unexecuted_blocks=1 01:03:33.147 01:03:33.147 ' 01:03:33.147 10:14:40 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:03:33.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:33.147 --rc genhtml_branch_coverage=1 01:03:33.147 --rc genhtml_function_coverage=1 01:03:33.147 --rc genhtml_legend=1 01:03:33.147 --rc geninfo_all_blocks=1 01:03:33.147 --rc geninfo_unexecuted_blocks=1 01:03:33.147 01:03:33.147 ' 01:03:33.147 10:14:40 nvmf_dif -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
01:03:33.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:33.147 --rc genhtml_branch_coverage=1 01:03:33.147 --rc genhtml_function_coverage=1 01:03:33.147 --rc genhtml_legend=1 01:03:33.147 --rc geninfo_all_blocks=1 01:03:33.147 --rc geninfo_unexecuted_blocks=1 01:03:33.147 01:03:33.147 ' 01:03:33.147 10:14:40 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:03:33.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:33.147 --rc genhtml_branch_coverage=1 01:03:33.147 --rc genhtml_function_coverage=1 01:03:33.147 --rc genhtml_legend=1 01:03:33.147 --rc geninfo_all_blocks=1 01:03:33.147 --rc geninfo_unexecuted_blocks=1 01:03:33.147 01:03:33.147 ' 01:03:33.147 10:14:40 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:03:33.147 10:14:40 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 01:03:33.147 10:14:40 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:03:33.147 10:14:40 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:03:33.147 10:14:40 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:03:33.147 10:14:40 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:33.147 10:14:40 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:33.147 10:14:40 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:33.147 10:14:40 nvmf_dif -- paths/export.sh@5 -- # export PATH 01:03:33.147 10:14:40 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@51 -- # : 0 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:03:33.147 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 01:03:33.147 10:14:40 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 01:03:33.147 10:14:40 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 01:03:33.147 10:14:40 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 01:03:33.147 10:14:40 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 01:03:33.147 10:14:40 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:33.147 10:14:40 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:03:33.147 10:14:40 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:03:33.147 10:14:40 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:03:33.147 10:14:40 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:03:33.407 Cannot find device "nvmf_init_br" 01:03:33.407 10:14:40 nvmf_dif -- nvmf/common.sh@162 -- # true 01:03:33.407 10:14:40 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:03:33.407 Cannot find device "nvmf_init_br2" 01:03:33.407 10:14:40 nvmf_dif -- nvmf/common.sh@163 -- # true 01:03:33.407 10:14:40 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:03:33.407 Cannot find device "nvmf_tgt_br" 01:03:33.407 10:14:40 nvmf_dif -- nvmf/common.sh@164 -- # true 01:03:33.407 10:14:40 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:03:33.407 Cannot find device "nvmf_tgt_br2" 01:03:33.407 10:14:40 nvmf_dif -- nvmf/common.sh@165 -- # true 01:03:33.407 10:14:40 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:03:33.407 Cannot find device "nvmf_init_br" 01:03:33.407 10:14:40 nvmf_dif -- nvmf/common.sh@166 -- # true 01:03:33.407 10:14:40 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:03:33.407 Cannot find device "nvmf_init_br2" 01:03:33.407 10:14:40 nvmf_dif -- nvmf/common.sh@167 -- # true 01:03:33.407 10:14:40 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:03:33.407 Cannot find device "nvmf_tgt_br" 01:03:33.407 10:14:40 nvmf_dif -- nvmf/common.sh@168 -- # true 01:03:33.407 10:14:40 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:03:33.407 Cannot find device "nvmf_tgt_br2" 01:03:33.407 10:14:40 nvmf_dif -- nvmf/common.sh@169 -- # true 01:03:33.407 10:14:40 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:03:33.407 Cannot find device "nvmf_br" 01:03:33.407 10:14:40 nvmf_dif -- nvmf/common.sh@170 -- # true 01:03:33.407 10:14:40 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 01:03:33.407 Cannot find device "nvmf_init_if" 01:03:33.407 10:14:40 nvmf_dif -- nvmf/common.sh@171 -- # true 01:03:33.407 10:14:40 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:03:33.407 Cannot find device "nvmf_init_if2" 01:03:33.407 10:14:40 nvmf_dif -- nvmf/common.sh@172 -- # true 01:03:33.407 10:14:40 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:33.407 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:33.407 10:14:40 nvmf_dif -- nvmf/common.sh@173 -- # true 01:03:33.407 10:14:40 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:03:33.407 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:33.407 10:14:40 nvmf_dif -- nvmf/common.sh@174 -- # true 01:03:33.407 10:14:40 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:03:33.407 10:14:40 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:03:33.407 10:14:40 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:03:33.407 10:14:40 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:03:33.407 10:14:40 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:03:33.667 10:14:40 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:03:33.667 10:14:40 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:03:33.667 10:14:40 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:03:33.667 10:14:40 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:03:33.667 10:14:40 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:03:33.667 10:14:40 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:03:33.667 10:14:40 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:03:33.667 10:14:40 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:03:33.667 10:14:40 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:03:33.667 10:14:40 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:03:33.667 10:14:40 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:03:33.667 10:14:40 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:03:33.667 10:14:40 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:03:33.667 10:14:40 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:03:33.667 10:14:40 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:03:33.667 10:14:40 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:03:33.667 10:14:40 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:03:33.667 10:14:40 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:03:33.667 10:14:40 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:03:33.667 10:14:40 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:03:33.667 10:14:40 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:03:33.667 10:14:40 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:03:33.667 10:14:40 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:03:33.667 10:14:40 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:03:33.667 10:14:40 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:03:33.667 10:14:40 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:03:33.667 10:14:40 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:03:33.667 10:14:40 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:03:33.667 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:03:33.667 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms 01:03:33.667 01:03:33.667 --- 10.0.0.3 ping statistics --- 01:03:33.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:33.667 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 01:03:33.667 10:14:40 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:03:33.667 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:03:33.667 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 01:03:33.667 01:03:33.667 --- 10.0.0.4 ping statistics --- 01:03:33.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:33.667 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 01:03:33.667 10:14:40 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:03:33.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:03:33.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 01:03:33.667 01:03:33.667 --- 10.0.0.1 ping statistics --- 01:03:33.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:33.667 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 01:03:33.667 10:14:40 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:03:33.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:03:33.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 01:03:33.667 01:03:33.667 --- 10.0.0.2 ping statistics --- 01:03:33.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:33.667 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 01:03:33.667 10:14:40 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:03:33.667 10:14:40 nvmf_dif -- nvmf/common.sh@461 -- # return 0 01:03:33.667 10:14:40 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 01:03:33.667 10:14:40 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:03:34.238 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:03:34.238 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 01:03:34.238 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 01:03:34.238 10:14:41 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:03:34.238 10:14:41 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:03:34.238 10:14:41 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:03:34.238 10:14:41 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:03:34.238 10:14:41 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:03:34.238 10:14:41 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:03:34.238 10:14:41 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 01:03:34.238 10:14:41 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 01:03:34.238 10:14:41 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:03:34.238 10:14:41 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 01:03:34.238 10:14:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:03:34.238 10:14:41 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 01:03:34.238 10:14:41 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=109903 01:03:34.238 10:14:41 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 109903 01:03:34.238 10:14:41 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 109903 ']' 01:03:34.238 10:14:41 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:34.238 10:14:41 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 01:03:34.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:03:34.238 10:14:41 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:03:34.238 10:14:41 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 01:03:34.238 10:14:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:03:34.238 [2024-12-09 10:14:41.199077] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 01:03:34.238 [2024-12-09 10:14:41.199133] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:03:34.498 [2024-12-09 10:14:41.347585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:34.498 [2024-12-09 10:14:41.393452] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
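Everything from nvmf_veth_init down to the four pings above builds one fixed topology: an initiator-side veth pair and a target-side veth pair joined by a bridge, with the target ends moved into the nvmf_tgt_ns_spdk namespace and tagged iptables ACCEPT rules for port 4420. A reduced sketch with the same names and 10.0.0.0/24 addresses seen here; the real helper also sets up the second if2/br2 pairs the same way:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# One bridge glues the host-side peers together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# The ACCEPT rule carries an SPDK_NVMF comment so teardown can strip it
# later with iptables-save | grep -v SPDK_NVMF | iptables-restore.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.3                                   # host -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> host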
01:03:34.498 [2024-12-09 10:14:41.393500] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:03:34.498 [2024-12-09 10:14:41.393506] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:03:34.498 [2024-12-09 10:14:41.393511] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:03:34.498 [2024-12-09 10:14:41.393515] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:03:34.498 [2024-12-09 10:14:41.393783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:03:35.067 10:14:42 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:03:35.067 10:14:42 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 01:03:35.067 10:14:42 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:03:35.067 10:14:42 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 01:03:35.067 10:14:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:03:35.328 10:14:42 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:03:35.328 10:14:42 nvmf_dif -- target/dif.sh@139 -- # create_transport 01:03:35.328 10:14:42 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 01:03:35.328 10:14:42 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:35.328 10:14:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:03:35.328 [2024-12-09 10:14:42.119999] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:03:35.328 10:14:42 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:35.328 10:14:42 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 01:03:35.328 10:14:42 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:03:35.328 10:14:42 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 01:03:35.328 10:14:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:03:35.328 ************************************ 01:03:35.328 START TEST fio_dif_1_default 01:03:35.328 ************************************ 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:03:35.328 bdev_null0 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:35.328 10:14:42 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:03:35.328 [2024-12-09 10:14:42.168002] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:03:35.328 { 01:03:35.328 "params": { 01:03:35.328 "name": "Nvme$subsystem", 01:03:35.328 "trtype": "$TEST_TRANSPORT", 01:03:35.328 "traddr": "$NVMF_FIRST_TARGET_IP", 01:03:35.328 "adrfam": "ipv4", 01:03:35.328 "trsvcid": "$NVMF_PORT", 01:03:35.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:03:35.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:03:35.328 "hdgst": ${hdgst:-false}, 01:03:35.328 "ddgst": ${ddgst:-false} 01:03:35.328 }, 01:03:35.328 "method": "bdev_nvme_attach_controller" 01:03:35.328 } 01:03:35.328 EOF 01:03:35.328 )") 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:03:35.328 10:14:42 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 01:03:35.328 10:14:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 01:03:35.329 10:14:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:03:35.329 10:14:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 01:03:35.329 10:14:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:03:35.329 10:14:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 01:03:35.329 10:14:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 01:03:35.329 10:14:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:03:35.329 "params": { 01:03:35.329 "name": "Nvme0", 01:03:35.329 "trtype": "tcp", 01:03:35.329 "traddr": "10.0.0.3", 01:03:35.329 "adrfam": "ipv4", 01:03:35.329 "trsvcid": "4420", 01:03:35.329 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:03:35.329 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:03:35.329 "hdgst": false, 01:03:35.329 "ddgst": false 01:03:35.329 }, 01:03:35.329 "method": "bdev_nvme_attach_controller" 01:03:35.329 }' 01:03:35.329 10:14:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 01:03:35.329 10:14:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:03:35.329 10:14:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:03:35.329 10:14:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:03:35.329 10:14:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:03:35.329 10:14:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:03:35.329 10:14:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 01:03:35.329 10:14:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:03:35.329 10:14:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:03:35.329 10:14:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:03:35.588 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 01:03:35.588 fio-3.35 01:03:35.588 Starting 1 thread 01:03:47.812 01:03:47.812 filename0: (groupid=0, jobs=1): err= 0: pid=109993: Mon Dec 9 10:14:52 2024 01:03:47.812 read: IOPS=311, BW=1247KiB/s (1277kB/s)(12.2MiB/10023msec) 01:03:47.812 slat (nsec): min=5230, max=41349, avg=6025.47, stdev=2065.11 01:03:47.812 clat (usec): min=289, max=41677, avg=12816.40, stdev=18695.60 01:03:47.812 lat (usec): min=294, max=41683, avg=12822.42, stdev=18695.49 01:03:47.812 clat percentiles (usec): 01:03:47.812 | 1.00th=[ 302], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 314], 01:03:47.812 | 30.00th=[ 318], 40.00th=[ 326], 50.00th=[ 330], 60.00th=[ 
338], 01:03:47.812 | 70.00th=[40109], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 01:03:47.812 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 01:03:47.812 | 99.99th=[41681] 01:03:47.812 bw ( KiB/s): min= 960, max= 1600, per=100.00%, avg=1248.00, stdev=194.79, samples=20 01:03:47.812 iops : min= 240, max= 400, avg=312.00, stdev=48.70, samples=20 01:03:47.812 lat (usec) : 500=68.34%, 750=0.67% 01:03:47.812 lat (msec) : 4=0.13%, 50=30.86% 01:03:47.812 cpu : usr=93.05%, sys=6.61%, ctx=29, majf=0, minf=9 01:03:47.812 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:03:47.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:03:47.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:03:47.812 issued rwts: total=3124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:03:47.812 latency : target=0, window=0, percentile=100.00%, depth=4 01:03:47.812 01:03:47.812 Run status group 0 (all jobs): 01:03:47.812 READ: bw=1247KiB/s (1277kB/s), 1247KiB/s-1247KiB/s (1277kB/s-1277kB/s), io=12.2MiB (12.8MB), run=10023-10023msec 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:03:47.812 ************************************ 01:03:47.812 END TEST fio_dif_1_default 01:03:47.812 ************************************ 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:47.812 01:03:47.812 real 0m11.025s 01:03:47.812 user 0m9.996s 01:03:47.812 sys 0m0.955s 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:03:47.812 10:14:53 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 01:03:47.812 10:14:53 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:03:47.812 10:14:53 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 01:03:47.812 10:14:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:03:47.812 ************************************ 01:03:47.812 START TEST fio_dif_1_multi_subsystems 01:03:47.812 ************************************ 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 
-- # local files=1 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:03:47.812 bdev_null0 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:03:47.812 [2024-12-09 10:14:53.273298] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:03:47.812 bdev_null1 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:03:47.812 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:03:47.812 { 01:03:47.812 "params": { 01:03:47.812 "name": "Nvme$subsystem", 01:03:47.812 "trtype": "$TEST_TRANSPORT", 01:03:47.812 "traddr": "$NVMF_FIRST_TARGET_IP", 01:03:47.812 "adrfam": "ipv4", 01:03:47.812 "trsvcid": "$NVMF_PORT", 01:03:47.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:03:47.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:03:47.812 "hdgst": ${hdgst:-false}, 01:03:47.813 "ddgst": ${ddgst:-false} 01:03:47.813 }, 01:03:47.813 "method": "bdev_nvme_attach_controller" 01:03:47.813 } 01:03:47.813 EOF 01:03:47.813 )") 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:03:47.813 { 01:03:47.813 "params": { 01:03:47.813 "name": "Nvme$subsystem", 01:03:47.813 "trtype": "$TEST_TRANSPORT", 01:03:47.813 "traddr": "$NVMF_FIRST_TARGET_IP", 01:03:47.813 "adrfam": "ipv4", 01:03:47.813 "trsvcid": "$NVMF_PORT", 01:03:47.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:03:47.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:03:47.813 "hdgst": ${hdgst:-false}, 01:03:47.813 "ddgst": ${ddgst:-false} 01:03:47.813 }, 01:03:47.813 "method": "bdev_nvme_attach_controller" 01:03:47.813 } 01:03:47.813 EOF 01:03:47.813 )") 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
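The create_subsystems trace above (target/dif.sh@21-24) reduces to four RPCs per subsystem. A minimal sketch of that sequence, assuming SPDK's scripts/rpc.py is used in place of the suite's rpc_cmd wrapper and an nvmf target with a TCP transport is already running; bdev sizes, NQNs, serial numbers and the listener address are copied from the trace:

sub=0   # repeat with sub=1 for the second subsystem
# 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
scripts/rpc.py bdev_null_create "bdev_null${sub}" 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${sub}" --serial-number "53313233-${sub}" --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${sub}" "bdev_null${sub}"
scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${sub}" -t tcp -a 10.0.0.3 -s 4420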
01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:03:47.813 "params": { 01:03:47.813 "name": "Nvme0", 01:03:47.813 "trtype": "tcp", 01:03:47.813 "traddr": "10.0.0.3", 01:03:47.813 "adrfam": "ipv4", 01:03:47.813 "trsvcid": "4420", 01:03:47.813 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:03:47.813 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:03:47.813 "hdgst": false, 01:03:47.813 "ddgst": false 01:03:47.813 }, 01:03:47.813 "method": "bdev_nvme_attach_controller" 01:03:47.813 },{ 01:03:47.813 "params": { 01:03:47.813 "name": "Nvme1", 01:03:47.813 "trtype": "tcp", 01:03:47.813 "traddr": "10.0.0.3", 01:03:47.813 "adrfam": "ipv4", 01:03:47.813 "trsvcid": "4420", 01:03:47.813 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:03:47.813 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:03:47.813 "hdgst": false, 01:03:47.813 "ddgst": false 01:03:47.813 }, 01:03:47.813 "method": "bdev_nvme_attach_controller" 01:03:47.813 }' 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:03:47.813 10:14:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:03:47.813 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 01:03:47.813 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 01:03:47.813 fio-3.35 01:03:47.813 Starting 2 threads 01:03:57.871 01:03:57.871 filename0: (groupid=0, jobs=1): err= 0: pid=110154: Mon Dec 9 10:15:04 2024 01:03:57.871 read: IOPS=257, BW=1029KiB/s (1054kB/s)(10.1MiB/10014msec) 01:03:57.871 slat (nsec): min=5652, max=71140, avg=9061.05, stdev=5536.93 01:03:57.871 clat (usec): min=305, max=42466, avg=15521.79, stdev=19567.86 01:03:57.871 lat (usec): min=311, max=42473, avg=15530.85, stdev=19567.16 01:03:57.871 clat percentiles (usec): 01:03:57.871 | 1.00th=[ 318], 5.00th=[ 334], 10.00th=[ 347], 20.00th=[ 363], 01:03:57.871 | 30.00th=[ 379], 40.00th=[ 424], 50.00th=[ 537], 60.00th=[ 693], 01:03:57.871 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 01:03:57.871 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 01:03:57.871 | 99.99th=[42206] 01:03:57.871 bw ( KiB/s): min= 544, max= 1920, per=36.85%, avg=1028.80, stdev=389.89, samples=20 01:03:57.871 iops : 
min= 136, max= 480, avg=257.20, stdev=97.47, samples=20 01:03:57.871 lat (usec) : 500=48.95%, 750=12.58%, 1000=0.27% 01:03:57.871 lat (msec) : 2=0.93%, 50=37.27% 01:03:57.871 cpu : usr=97.09%, sys=2.47%, ctx=20, majf=0, minf=9 01:03:57.871 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:03:57.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:03:57.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:03:57.871 issued rwts: total=2576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:03:57.871 latency : target=0, window=0, percentile=100.00%, depth=4 01:03:57.871 filename1: (groupid=0, jobs=1): err= 0: pid=110155: Mon Dec 9 10:15:04 2024 01:03:57.871 read: IOPS=440, BW=1762KiB/s (1804kB/s)(17.2MiB/10008msec) 01:03:57.871 slat (nsec): min=5393, max=46615, avg=7868.12, stdev=4447.34 01:03:57.871 clat (usec): min=303, max=42382, avg=9058.55, stdev=16593.95 01:03:57.871 lat (usec): min=309, max=42388, avg=9066.42, stdev=16593.70 01:03:57.871 clat percentiles (usec): 01:03:57.871 | 1.00th=[ 322], 5.00th=[ 355], 10.00th=[ 367], 20.00th=[ 379], 01:03:57.871 | 30.00th=[ 388], 40.00th=[ 396], 50.00th=[ 404], 60.00th=[ 416], 01:03:57.871 | 70.00th=[ 453], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 01:03:57.871 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 01:03:57.871 | 99.99th=[42206] 01:03:57.871 bw ( KiB/s): min= 576, max= 3552, per=60.97%, avg=1701.05, stdev=993.05, samples=19 01:03:57.871 iops : min= 144, max= 888, avg=425.26, stdev=248.26, samples=19 01:03:57.871 lat (usec) : 500=71.55%, 750=6.22%, 1000=0.27% 01:03:57.871 lat (msec) : 2=0.64%, 50=21.32% 01:03:57.871 cpu : usr=97.63%, sys=1.85%, ctx=111, majf=0, minf=0 01:03:57.871 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:03:57.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:03:57.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:03:57.871 issued rwts: total=4408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:03:57.871 latency : target=0, window=0, percentile=100.00%, depth=4 01:03:57.871 01:03:57.871 Run status group 0 (all jobs): 01:03:57.871 READ: bw=2790KiB/s (2857kB/s), 1029KiB/s-1762KiB/s (1054kB/s-1804kB/s), io=27.3MiB (28.6MB), run=10008-10014msec 01:03:57.871 10:15:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 01:03:57.871 10:15:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 01:03:57.871 10:15:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 01:03:57.871 10:15:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 01:03:57.871 10:15:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 01:03:57.872 10:15:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:03:57.872 10:15:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:57.872 10:15:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:03:57.872 10:15:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:57.872 10:15:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:03:57.872 10:15:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:57.872 10:15:04 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:03:57.872 10:15:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:57.872 10:15:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 01:03:57.872 10:15:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 01:03:57.872 10:15:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 01:03:57.872 10:15:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:03:57.872 10:15:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:57.872 10:15:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:03:57.872 10:15:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:57.872 10:15:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 01:03:57.872 10:15:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:57.872 10:15:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:03:57.872 ************************************ 01:03:57.872 END TEST fio_dif_1_multi_subsystems 01:03:57.872 ************************************ 01:03:57.872 10:15:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:57.872 01:03:57.872 real 0m11.337s 01:03:57.872 user 0m20.444s 01:03:57.872 sys 0m0.768s 01:03:57.872 10:15:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 01:03:57.872 10:15:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:03:57.872 10:15:04 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 01:03:57.872 10:15:04 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:03:57.872 10:15:04 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 01:03:57.872 10:15:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:03:57.872 ************************************ 01:03:57.872 START TEST fio_dif_rand_params 01:03:57.872 ************************************ 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:03:57.872 bdev_null0 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:03:57.872 [2024-12-09 10:15:04.682245] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:03:57.872 { 01:03:57.872 "params": { 01:03:57.872 "name": "Nvme$subsystem", 01:03:57.872 "trtype": "$TEST_TRANSPORT", 01:03:57.872 "traddr": "$NVMF_FIRST_TARGET_IP", 01:03:57.872 "adrfam": "ipv4", 01:03:57.872 "trsvcid": "$NVMF_PORT", 01:03:57.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:03:57.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:03:57.872 "hdgst": ${hdgst:-false}, 01:03:57.872 
"ddgst": ${ddgst:-false} 01:03:57.872 }, 01:03:57.872 "method": "bdev_nvme_attach_controller" 01:03:57.872 } 01:03:57.872 EOF 01:03:57.872 )") 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:03:57.872 "params": { 01:03:57.872 "name": "Nvme0", 01:03:57.872 "trtype": "tcp", 01:03:57.872 "traddr": "10.0.0.3", 01:03:57.872 "adrfam": "ipv4", 01:03:57.872 "trsvcid": "4420", 01:03:57.872 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:03:57.872 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:03:57.872 "hdgst": false, 01:03:57.872 "ddgst": false 01:03:57.872 }, 01:03:57.872 "method": "bdev_nvme_attach_controller" 01:03:57.872 }' 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:03:57.872 10:15:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:03:58.132 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 01:03:58.133 ... 
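gen_fio_conf's job file is never echoed to the log, but the parameters set at target/dif.sh@103 (bs=128k, numjobs=3, iodepth=3, runtime=5) and the fio job banner above fix its shape. An illustrative reconstruction as a shell heredoc; the exact generated contents and the Nvme0n1 bdev name (SPDK's usual namespace naming for a controller attached as Nvme0) are assumptions:

cat > job.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
time_based=1
runtime=5
rw=randread
bs=128k
iodepth=3
numjobs=3

[filename0]
filename=Nvme0n1
EOF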
01:03:58.133 fio-3.35 01:03:58.133 Starting 3 threads 01:04:04.711 01:04:04.711 filename0: (groupid=0, jobs=1): err= 0: pid=110311: Mon Dec 9 10:15:10 2024 01:04:04.711 read: IOPS=259, BW=32.5MiB/s (34.1MB/s)(163MiB/5005msec) 01:04:04.711 slat (nsec): min=5794, max=46055, avg=12905.58, stdev=6871.40 01:04:04.711 clat (usec): min=4155, max=51601, avg=11518.53, stdev=11996.30 01:04:04.711 lat (usec): min=4164, max=51622, avg=11531.44, stdev=11996.30 01:04:04.711 clat percentiles (usec): 01:04:04.711 | 1.00th=[ 5080], 5.00th=[ 5473], 10.00th=[ 5997], 20.00th=[ 6325], 01:04:04.711 | 30.00th=[ 6718], 40.00th=[ 7570], 50.00th=[ 8160], 60.00th=[ 8586], 01:04:04.711 | 70.00th=[ 8848], 80.00th=[ 9241], 90.00th=[10290], 95.00th=[48497], 01:04:04.711 | 99.00th=[50070], 99.50th=[50594], 99.90th=[51643], 99.95th=[51643], 01:04:04.711 | 99.99th=[51643] 01:04:04.711 bw ( KiB/s): min=22272, max=48384, per=29.44%, avg=33280.00, stdev=7510.30, samples=10 01:04:04.711 iops : min= 174, max= 378, avg=260.00, stdev=58.67, samples=10 01:04:04.711 lat (msec) : 10=88.70%, 20=1.84%, 50=8.15%, 100=1.31% 01:04:04.711 cpu : usr=95.94%, sys=2.98%, ctx=11, majf=0, minf=0 01:04:04.711 IO depths : 1=6.8%, 2=93.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:04:04.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:04.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:04.711 issued rwts: total=1301,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:04.711 latency : target=0, window=0, percentile=100.00%, depth=3 01:04:04.711 filename0: (groupid=0, jobs=1): err= 0: pid=110312: Mon Dec 9 10:15:10 2024 01:04:04.711 read: IOPS=381, BW=47.7MiB/s (50.0MB/s)(239MiB/5001msec) 01:04:04.711 slat (nsec): min=5856, max=44215, avg=13428.14, stdev=10053.28 01:04:04.711 clat (usec): min=2771, max=45189, avg=7831.20, stdev=3424.51 01:04:04.711 lat (usec): min=2778, max=45222, avg=7844.63, stdev=3427.08 01:04:04.711 clat percentiles (usec): 01:04:04.711 | 1.00th=[ 3458], 5.00th=[ 3490], 10.00th=[ 3556], 20.00th=[ 3687], 01:04:04.711 | 30.00th=[ 6783], 40.00th=[ 7111], 50.00th=[ 7504], 60.00th=[ 8029], 01:04:04.711 | 70.00th=[ 9503], 80.00th=[11338], 90.00th=[12125], 95.00th=[12649], 01:04:04.711 | 99.00th=[13304], 99.50th=[13698], 99.90th=[45351], 99.95th=[45351], 01:04:04.711 | 99.99th=[45351] 01:04:04.711 bw ( KiB/s): min=34560, max=57600, per=42.73%, avg=48298.67, stdev=6959.25, samples=9 01:04:04.711 iops : min= 270, max= 450, avg=377.33, stdev=54.37, samples=9 01:04:04.711 lat (msec) : 4=23.79%, 10=47.59%, 20=28.46%, 50=0.16% 01:04:04.711 cpu : usr=93.22%, sys=4.88%, ctx=10, majf=0, minf=0 01:04:04.711 IO depths : 1=29.8%, 2=70.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:04:04.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:04.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:04.711 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:04.711 latency : target=0, window=0, percentile=100.00%, depth=3 01:04:04.711 filename0: (groupid=0, jobs=1): err= 0: pid=110313: Mon Dec 9 10:15:10 2024 01:04:04.711 read: IOPS=242, BW=30.3MiB/s (31.7MB/s)(151MiB/5002msec) 01:04:04.711 slat (nsec): min=5759, max=40113, avg=11423.01, stdev=6197.25 01:04:04.711 clat (usec): min=3311, max=53569, avg=12371.11, stdev=12687.63 01:04:04.711 lat (usec): min=3317, max=53575, avg=12382.54, stdev=12687.62 01:04:04.711 clat percentiles (usec): 01:04:04.711 | 1.00th=[ 4752], 5.00th=[ 5145], 10.00th=[ 5800], 20.00th=[ 6194], 
01:04:04.711 | 30.00th=[ 6521], 40.00th=[ 8160], 50.00th=[ 8979], 60.00th=[ 9372], 01:04:04.711 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[45876], 95.00th=[49021], 01:04:04.711 | 99.00th=[50594], 99.50th=[51643], 99.90th=[53216], 99.95th=[53740], 01:04:04.711 | 99.99th=[53740] 01:04:04.711 bw ( KiB/s): min=19200, max=42240, per=27.90%, avg=31538.00, stdev=6583.71, samples=9 01:04:04.711 iops : min= 150, max= 330, avg=246.33, stdev=51.44, samples=9 01:04:04.711 lat (msec) : 4=0.50%, 10=78.12%, 20=10.73%, 50=7.76%, 100=2.89% 01:04:04.711 cpu : usr=96.06%, sys=3.00%, ctx=18, majf=0, minf=0 01:04:04.711 IO depths : 1=6.4%, 2=93.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:04:04.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:04.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:04.711 issued rwts: total=1211,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:04.711 latency : target=0, window=0, percentile=100.00%, depth=3 01:04:04.711 01:04:04.711 Run status group 0 (all jobs): 01:04:04.711 READ: bw=110MiB/s (116MB/s), 30.3MiB/s-47.7MiB/s (31.7MB/s-50.0MB/s), io=553MiB (579MB), run=5001-5005msec 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:04:04.711 bdev_null0 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:04:04.711 [2024-12-09 10:15:10.835890] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:04:04.711 bdev_null1 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:04.711 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:04:04.711 bdev_null2 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # gen_fio_conf 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:04:04.712 { 01:04:04.712 "params": { 01:04:04.712 "name": "Nvme$subsystem", 01:04:04.712 "trtype": "$TEST_TRANSPORT", 01:04:04.712 "traddr": "$NVMF_FIRST_TARGET_IP", 01:04:04.712 "adrfam": "ipv4", 01:04:04.712 "trsvcid": "$NVMF_PORT", 01:04:04.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:04:04.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:04:04.712 "hdgst": ${hdgst:-false}, 01:04:04.712 "ddgst": ${ddgst:-false} 01:04:04.712 }, 01:04:04.712 "method": "bdev_nvme_attach_controller" 01:04:04.712 } 01:04:04.712 EOF 01:04:04.712 )") 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:04:04.712 { 01:04:04.712 "params": { 01:04:04.712 "name": "Nvme$subsystem", 01:04:04.712 "trtype": "$TEST_TRANSPORT", 01:04:04.712 "traddr": "$NVMF_FIRST_TARGET_IP", 01:04:04.712 "adrfam": "ipv4", 01:04:04.712 "trsvcid": "$NVMF_PORT", 01:04:04.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:04:04.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:04:04.712 "hdgst": ${hdgst:-false}, 01:04:04.712 "ddgst": ${ddgst:-false} 01:04:04.712 }, 01:04:04.712 "method": "bdev_nvme_attach_controller" 01:04:04.712 } 01:04:04.712 EOF 01:04:04.712 )") 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 01:04:04.712 10:15:10 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:04:04.712 { 01:04:04.712 "params": { 01:04:04.712 "name": "Nvme$subsystem", 01:04:04.712 "trtype": "$TEST_TRANSPORT", 01:04:04.712 "traddr": "$NVMF_FIRST_TARGET_IP", 01:04:04.712 "adrfam": "ipv4", 01:04:04.712 "trsvcid": "$NVMF_PORT", 01:04:04.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:04:04.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:04:04.712 "hdgst": ${hdgst:-false}, 01:04:04.712 "ddgst": ${ddgst:-false} 01:04:04.712 }, 01:04:04.712 "method": "bdev_nvme_attach_controller" 01:04:04.712 } 01:04:04.712 EOF 01:04:04.712 )") 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:04:04.712 "params": { 01:04:04.712 "name": "Nvme0", 01:04:04.712 "trtype": "tcp", 01:04:04.712 "traddr": "10.0.0.3", 01:04:04.712 "adrfam": "ipv4", 01:04:04.712 "trsvcid": "4420", 01:04:04.712 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:04:04.712 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:04:04.712 "hdgst": false, 01:04:04.712 "ddgst": false 01:04:04.712 }, 01:04:04.712 "method": "bdev_nvme_attach_controller" 01:04:04.712 },{ 01:04:04.712 "params": { 01:04:04.712 "name": "Nvme1", 01:04:04.712 "trtype": "tcp", 01:04:04.712 "traddr": "10.0.0.3", 01:04:04.712 "adrfam": "ipv4", 01:04:04.712 "trsvcid": "4420", 01:04:04.712 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:04:04.712 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:04:04.712 "hdgst": false, 01:04:04.712 "ddgst": false 01:04:04.712 }, 01:04:04.712 "method": "bdev_nvme_attach_controller" 01:04:04.712 },{ 01:04:04.712 "params": { 01:04:04.712 "name": "Nvme2", 01:04:04.712 "trtype": "tcp", 01:04:04.712 "traddr": "10.0.0.3", 01:04:04.712 "adrfam": "ipv4", 01:04:04.712 "trsvcid": "4420", 01:04:04.712 "subnqn": "nqn.2016-06.io.spdk:cnode2", 01:04:04.712 "hostnqn": "nqn.2016-06.io.spdk:host2", 01:04:04.712 "hdgst": false, 01:04:04.712 "ddgst": false 01:04:04.712 }, 01:04:04.712 "method": "bdev_nvme_attach_controller" 01:04:04.712 }' 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:04:04.712 10:15:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:04:04.712 10:15:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 01:04:04.712 10:15:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:04:04.712 
10:15:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:04:04.712 10:15:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:04:04.712 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 01:04:04.712 ... 01:04:04.712 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 01:04:04.712 ... 01:04:04.712 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 01:04:04.712 ... 01:04:04.712 fio-3.35 01:04:04.712 Starting 24 threads 01:04:16.935 01:04:16.935 filename0: (groupid=0, jobs=1): err= 0: pid=110413: Mon Dec 9 10:15:22 2024 01:04:16.935 read: IOPS=283, BW=1133KiB/s (1160kB/s)(11.1MiB/10045msec) 01:04:16.935 slat (usec): min=5, max=8042, avg=14.08, stdev=150.84 01:04:16.935 clat (msec): min=6, max=237, avg=56.34, stdev=22.79 01:04:16.935 lat (msec): min=6, max=237, avg=56.36, stdev=22.79 01:04:16.935 clat percentiles (msec): 01:04:16.935 | 1.00th=[ 10], 5.00th=[ 26], 10.00th=[ 35], 20.00th=[ 37], 01:04:16.935 | 30.00th=[ 47], 40.00th=[ 48], 50.00th=[ 56], 60.00th=[ 59], 01:04:16.935 | 70.00th=[ 62], 80.00th=[ 71], 90.00th=[ 85], 95.00th=[ 93], 01:04:16.935 | 99.00th=[ 129], 99.50th=[ 201], 99.90th=[ 201], 99.95th=[ 239], 01:04:16.935 | 99.99th=[ 239] 01:04:16.935 bw ( KiB/s): min= 592, max= 1897, per=4.66%, avg=1130.05, stdev=276.79, samples=20 01:04:16.935 iops : min= 148, max= 474, avg=282.50, stdev=69.16, samples=20 01:04:16.935 lat (msec) : 10=1.69%, 20=1.37%, 50=43.39%, 100=50.91%, 250=2.64% 01:04:16.935 cpu : usr=35.49%, sys=0.34%, ctx=970, majf=0, minf=9 01:04:16.935 IO depths : 1=1.1%, 2=2.4%, 4=9.4%, 8=74.6%, 16=12.7%, 32=0.0%, >=64=0.0% 01:04:16.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.935 complete : 0=0.0%, 4=90.0%, 8=5.7%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.935 issued rwts: total=2844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:16.935 latency : target=0, window=0, percentile=100.00%, depth=16 01:04:16.935 filename0: (groupid=0, jobs=1): err= 0: pid=110414: Mon Dec 9 10:15:22 2024 01:04:16.935 read: IOPS=240, BW=962KiB/s (985kB/s)(9632KiB/10011msec) 01:04:16.935 slat (usec): min=4, max=4067, avg=21.98, stdev=189.85 01:04:16.935 clat (msec): min=16, max=192, avg=66.35, stdev=23.33 01:04:16.935 lat (msec): min=16, max=192, avg=66.37, stdev=23.33 01:04:16.935 clat percentiles (msec): 01:04:16.935 | 1.00th=[ 25], 5.00th=[ 36], 10.00th=[ 42], 20.00th=[ 48], 01:04:16.935 | 30.00th=[ 54], 40.00th=[ 57], 50.00th=[ 62], 60.00th=[ 67], 01:04:16.935 | 70.00th=[ 78], 80.00th=[ 85], 90.00th=[ 93], 95.00th=[ 110], 01:04:16.935 | 99.00th=[ 130], 99.50th=[ 192], 99.90th=[ 192], 99.95th=[ 192], 01:04:16.935 | 99.99th=[ 192] 01:04:16.935 bw ( KiB/s): min= 640, max= 1200, per=3.86%, avg=937.05, stdev=151.79, samples=19 01:04:16.935 iops : min= 160, max= 300, avg=234.26, stdev=37.95, samples=19 01:04:16.935 lat (msec) : 20=0.42%, 50=23.50%, 100=69.23%, 250=6.85% 01:04:16.935 cpu : usr=43.16%, sys=0.70%, ctx=1305, majf=0, minf=9 01:04:16.935 IO depths : 1=2.2%, 2=5.2%, 4=15.0%, 8=66.4%, 16=11.3%, 32=0.0%, >=64=0.0% 01:04:16.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.935 complete : 0=0.0%, 4=91.6%, 8=3.6%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 
01:04:16.935 issued rwts: total=2408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:16.935 latency : target=0, window=0, percentile=100.00%, depth=16 01:04:16.935 filename0: (groupid=0, jobs=1): err= 0: pid=110415: Mon Dec 9 10:15:22 2024 01:04:16.935 read: IOPS=300, BW=1200KiB/s (1229kB/s)(11.8MiB/10033msec) 01:04:16.935 slat (usec): min=5, max=4024, avg=12.04, stdev=73.58 01:04:16.935 clat (msec): min=8, max=169, avg=53.19, stdev=21.15 01:04:16.935 lat (msec): min=8, max=169, avg=53.20, stdev=21.15 01:04:16.935 clat percentiles (msec): 01:04:16.935 | 1.00th=[ 10], 5.00th=[ 27], 10.00th=[ 34], 20.00th=[ 39], 01:04:16.935 | 30.00th=[ 43], 40.00th=[ 47], 50.00th=[ 51], 60.00th=[ 56], 01:04:16.935 | 70.00th=[ 60], 80.00th=[ 64], 90.00th=[ 79], 95.00th=[ 94], 01:04:16.935 | 99.00th=[ 123], 99.50th=[ 167], 99.90th=[ 169], 99.95th=[ 169], 01:04:16.935 | 99.99th=[ 169] 01:04:16.935 bw ( KiB/s): min= 768, max= 1920, per=4.94%, avg=1200.00, stdev=275.21, samples=20 01:04:16.935 iops : min= 192, max= 480, avg=300.00, stdev=68.80, samples=20 01:04:16.935 lat (msec) : 10=1.13%, 20=2.86%, 50=45.32%, 100=47.48%, 250=3.22% 01:04:16.935 cpu : usr=41.73%, sys=0.42%, ctx=1246, majf=0, minf=9 01:04:16.935 IO depths : 1=0.6%, 2=1.5%, 4=8.0%, 8=76.9%, 16=12.9%, 32=0.0%, >=64=0.0% 01:04:16.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.935 complete : 0=0.0%, 4=89.5%, 8=6.0%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.935 issued rwts: total=3010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:16.935 latency : target=0, window=0, percentile=100.00%, depth=16 01:04:16.935 filename0: (groupid=0, jobs=1): err= 0: pid=110416: Mon Dec 9 10:15:22 2024 01:04:16.935 read: IOPS=246, BW=988KiB/s (1012kB/s)(9892KiB/10014msec) 01:04:16.935 slat (usec): min=2, max=12043, avg=18.16, stdev=255.19 01:04:16.935 clat (msec): min=27, max=171, avg=64.63, stdev=22.61 01:04:16.935 lat (msec): min=27, max=171, avg=64.65, stdev=22.62 01:04:16.935 clat percentiles (msec): 01:04:16.935 | 1.00th=[ 32], 5.00th=[ 37], 10.00th=[ 40], 20.00th=[ 46], 01:04:16.935 | 30.00th=[ 53], 40.00th=[ 57], 50.00th=[ 60], 60.00th=[ 64], 01:04:16.935 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 95], 95.00th=[ 108], 01:04:16.935 | 99.00th=[ 136], 99.50th=[ 153], 99.90th=[ 171], 99.95th=[ 171], 01:04:16.935 | 99.99th=[ 171] 01:04:16.935 bw ( KiB/s): min= 560, max= 1328, per=4.02%, avg=976.11, stdev=212.60, samples=19 01:04:16.935 iops : min= 140, max= 332, avg=244.00, stdev=53.11, samples=19 01:04:16.935 lat (msec) : 50=26.16%, 100=66.64%, 250=7.20% 01:04:16.935 cpu : usr=42.21%, sys=0.64%, ctx=1243, majf=0, minf=9 01:04:16.935 IO depths : 1=1.4%, 2=3.3%, 4=11.9%, 8=71.5%, 16=11.9%, 32=0.0%, >=64=0.0% 01:04:16.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.935 complete : 0=0.0%, 4=90.5%, 8=4.7%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.935 issued rwts: total=2473,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:16.935 latency : target=0, window=0, percentile=100.00%, depth=16 01:04:16.935 filename0: (groupid=0, jobs=1): err= 0: pid=110417: Mon Dec 9 10:15:22 2024 01:04:16.935 read: IOPS=228, BW=913KiB/s (935kB/s)(9136KiB/10010msec) 01:04:16.935 slat (usec): min=3, max=8031, avg=19.44, stdev=237.22 01:04:16.935 clat (msec): min=23, max=213, avg=69.94, stdev=24.81 01:04:16.935 lat (msec): min=23, max=213, avg=69.96, stdev=24.81 01:04:16.935 clat percentiles (msec): 01:04:16.935 | 1.00th=[ 26], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 51], 01:04:16.935 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 64], 
60.00th=[ 72], 01:04:16.935 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 109], 01:04:16.935 | 99.00th=[ 144], 99.50th=[ 213], 99.90th=[ 213], 99.95th=[ 213], 01:04:16.935 | 99.99th=[ 213] 01:04:16.935 bw ( KiB/s): min= 512, max= 1200, per=3.67%, avg=891.63, stdev=177.23, samples=19 01:04:16.935 iops : min= 128, max= 300, avg=222.89, stdev=44.30, samples=19 01:04:16.935 lat (msec) : 50=19.83%, 100=71.89%, 250=8.27% 01:04:16.935 cpu : usr=32.97%, sys=0.23%, ctx=864, majf=0, minf=9 01:04:16.935 IO depths : 1=1.9%, 2=4.3%, 4=12.8%, 8=69.9%, 16=11.2%, 32=0.0%, >=64=0.0% 01:04:16.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.935 complete : 0=0.0%, 4=90.9%, 8=4.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.935 issued rwts: total=2284,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:16.935 latency : target=0, window=0, percentile=100.00%, depth=16 01:04:16.935 filename0: (groupid=0, jobs=1): err= 0: pid=110418: Mon Dec 9 10:15:22 2024 01:04:16.935 read: IOPS=287, BW=1151KiB/s (1179kB/s)(11.3MiB/10031msec) 01:04:16.935 slat (usec): min=5, max=8018, avg=21.89, stdev=241.57 01:04:16.935 clat (msec): min=7, max=207, avg=55.43, stdev=22.07 01:04:16.935 lat (msec): min=7, max=207, avg=55.46, stdev=22.08 01:04:16.935 clat percentiles (msec): 01:04:16.935 | 1.00th=[ 10], 5.00th=[ 26], 10.00th=[ 33], 20.00th=[ 40], 01:04:16.935 | 30.00th=[ 46], 40.00th=[ 51], 50.00th=[ 55], 60.00th=[ 57], 01:04:16.935 | 70.00th=[ 62], 80.00th=[ 69], 90.00th=[ 81], 95.00th=[ 92], 01:04:16.935 | 99.00th=[ 108], 99.50th=[ 209], 99.90th=[ 209], 99.95th=[ 209], 01:04:16.935 | 99.99th=[ 209] 01:04:16.935 bw ( KiB/s): min= 592, max= 1816, per=4.73%, avg=1148.40, stdev=238.39, samples=20 01:04:16.935 iops : min= 148, max= 454, avg=287.10, stdev=59.60, samples=20 01:04:16.935 lat (msec) : 10=1.11%, 20=2.77%, 50=35.37%, 100=58.95%, 250=1.80% 01:04:16.935 cpu : usr=43.45%, sys=0.60%, ctx=1394, majf=0, minf=9 01:04:16.935 IO depths : 1=1.4%, 2=3.4%, 4=13.2%, 8=70.2%, 16=11.8%, 32=0.0%, >=64=0.0% 01:04:16.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.935 complete : 0=0.0%, 4=90.6%, 8=4.5%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.935 issued rwts: total=2887,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:16.935 latency : target=0, window=0, percentile=100.00%, depth=16 01:04:16.935 filename0: (groupid=0, jobs=1): err= 0: pid=110419: Mon Dec 9 10:15:22 2024 01:04:16.935 read: IOPS=229, BW=916KiB/s (938kB/s)(9188KiB/10028msec) 01:04:16.935 slat (usec): min=3, max=8036, avg=27.35, stdev=334.43 01:04:16.935 clat (msec): min=23, max=226, avg=69.64, stdev=23.59 01:04:16.935 lat (msec): min=23, max=226, avg=69.67, stdev=23.59 01:04:16.935 clat percentiles (msec): 01:04:16.935 | 1.00th=[ 34], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 54], 01:04:16.935 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 72], 01:04:16.935 | 70.00th=[ 73], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 110], 01:04:16.935 | 99.00th=[ 153], 99.50th=[ 192], 99.90th=[ 226], 99.95th=[ 226], 01:04:16.935 | 99.99th=[ 226] 01:04:16.935 bw ( KiB/s): min= 560, max= 1104, per=3.68%, avg=892.74, stdev=161.97, samples=19 01:04:16.935 iops : min= 140, max= 276, avg=223.16, stdev=40.49, samples=19 01:04:16.935 lat (msec) : 50=17.63%, 100=73.66%, 250=8.71% 01:04:16.935 cpu : usr=32.83%, sys=0.35%, ctx=911, majf=0, minf=10 01:04:16.935 IO depths : 1=2.0%, 2=4.4%, 4=14.3%, 8=68.0%, 16=11.3%, 32=0.0%, >=64=0.0% 01:04:16.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
01:04:16.935 complete : 0=0.0%, 4=90.9%, 8=4.2%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.935 issued rwts: total=2297,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:16.935 latency : target=0, window=0, percentile=100.00%, depth=16 01:04:16.935 filename0: (groupid=0, jobs=1): err= 0: pid=110420: Mon Dec 9 10:15:22 2024 01:04:16.935 read: IOPS=253, BW=1016KiB/s (1040kB/s)(9.95MiB/10033msec) 01:04:16.935 slat (usec): min=5, max=8009, avg=17.54, stdev=177.53 01:04:16.935 clat (msec): min=10, max=226, avg=62.81, stdev=24.45 01:04:16.935 lat (msec): min=10, max=226, avg=62.83, stdev=24.45 01:04:16.935 clat percentiles (msec): 01:04:16.935 | 1.00th=[ 11], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 47], 01:04:16.935 | 30.00th=[ 50], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 65], 01:04:16.935 | 70.00th=[ 72], 80.00th=[ 79], 90.00th=[ 88], 95.00th=[ 96], 01:04:16.935 | 99.00th=[ 148], 99.50th=[ 190], 99.90th=[ 226], 99.95th=[ 226], 01:04:16.935 | 99.99th=[ 226] 01:04:16.935 bw ( KiB/s): min= 464, max= 1536, per=4.17%, avg=1012.80, stdev=252.55, samples=20 01:04:16.935 iops : min= 116, max= 384, avg=253.20, stdev=63.14, samples=20 01:04:16.935 lat (msec) : 20=2.51%, 50=28.18%, 100=65.50%, 250=3.81% 01:04:16.935 cpu : usr=34.99%, sys=0.47%, ctx=934, majf=0, minf=9 01:04:16.935 IO depths : 1=1.5%, 2=3.4%, 4=11.6%, 8=71.1%, 16=12.4%, 32=0.0%, >=64=0.0% 01:04:16.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.935 complete : 0=0.0%, 4=90.6%, 8=5.1%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.935 issued rwts: total=2548,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:16.935 latency : target=0, window=0, percentile=100.00%, depth=16 01:04:16.935 filename1: (groupid=0, jobs=1): err= 0: pid=110421: Mon Dec 9 10:15:22 2024 01:04:16.935 read: IOPS=230, BW=921KiB/s (943kB/s)(9216KiB/10009msec) 01:04:16.935 slat (usec): min=3, max=8031, avg=21.04, stdev=202.94 01:04:16.936 clat (msec): min=10, max=190, avg=69.36, stdev=22.05 01:04:16.936 lat (msec): min=10, max=190, avg=69.38, stdev=22.05 01:04:16.936 clat percentiles (msec): 01:04:16.936 | 1.00th=[ 24], 5.00th=[ 44], 10.00th=[ 49], 20.00th=[ 54], 01:04:16.936 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 71], 01:04:16.936 | 70.00th=[ 79], 80.00th=[ 85], 90.00th=[ 95], 95.00th=[ 109], 01:04:16.936 | 99.00th=[ 142], 99.50th=[ 163], 99.90th=[ 192], 99.95th=[ 192], 01:04:16.936 | 99.99th=[ 192] 01:04:16.936 bw ( KiB/s): min= 512, max= 1152, per=3.72%, avg=902.84, stdev=161.37, samples=19 01:04:16.936 iops : min= 128, max= 288, avg=225.68, stdev=40.32, samples=19 01:04:16.936 lat (msec) : 20=0.69%, 50=12.98%, 100=78.91%, 250=7.42% 01:04:16.936 cpu : usr=38.86%, sys=0.41%, ctx=1087, majf=0, minf=9 01:04:16.936 IO depths : 1=3.0%, 2=6.8%, 4=17.8%, 8=62.5%, 16=9.9%, 32=0.0%, >=64=0.0% 01:04:16.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.936 complete : 0=0.0%, 4=92.1%, 8=2.5%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.936 issued rwts: total=2304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:16.936 latency : target=0, window=0, percentile=100.00%, depth=16 01:04:16.936 filename1: (groupid=0, jobs=1): err= 0: pid=110422: Mon Dec 9 10:15:22 2024 01:04:16.936 read: IOPS=234, BW=938KiB/s (960kB/s)(9400KiB/10023msec) 01:04:16.936 slat (usec): min=5, max=8037, avg=19.68, stdev=231.42 01:04:16.936 clat (msec): min=24, max=263, avg=68.13, stdev=25.72 01:04:16.936 lat (msec): min=24, max=263, avg=68.15, stdev=25.71 01:04:16.936 clat percentiles (msec): 01:04:16.936 | 1.00th=[ 33], 5.00th=[ 36], 
10.00th=[ 45], 20.00th=[ 49], 01:04:16.936 | 30.00th=[ 57], 40.00th=[ 60], 50.00th=[ 61], 60.00th=[ 71], 01:04:16.936 | 70.00th=[ 75], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 106], 01:04:16.936 | 99.00th=[ 155], 99.50th=[ 239], 99.90th=[ 264], 99.95th=[ 264], 01:04:16.936 | 99.99th=[ 264] 01:04:16.936 bw ( KiB/s): min= 512, max= 1296, per=3.80%, avg=921.89, stdev=208.74, samples=19 01:04:16.936 iops : min= 128, max= 324, avg=230.47, stdev=52.18, samples=19 01:04:16.936 lat (msec) : 50=24.77%, 100=67.79%, 250=7.23%, 500=0.21% 01:04:16.936 cpu : usr=33.16%, sys=0.30%, ctx=900, majf=0, minf=9 01:04:16.936 IO depths : 1=1.7%, 2=4.0%, 4=13.3%, 8=69.5%, 16=11.4%, 32=0.0%, >=64=0.0% 01:04:16.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.936 complete : 0=0.0%, 4=90.7%, 8=4.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.936 issued rwts: total=2350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:16.936 latency : target=0, window=0, percentile=100.00%, depth=16 01:04:16.936 filename1: (groupid=0, jobs=1): err= 0: pid=110423: Mon Dec 9 10:15:22 2024 01:04:16.936 read: IOPS=300, BW=1200KiB/s (1229kB/s)(11.8MiB/10046msec) 01:04:16.936 slat (usec): min=5, max=11020, avg=18.67, stdev=236.86 01:04:16.936 clat (msec): min=8, max=198, avg=53.13, stdev=20.62 01:04:16.936 lat (msec): min=8, max=198, avg=53.15, stdev=20.62 01:04:16.936 clat percentiles (msec): 01:04:16.936 | 1.00th=[ 10], 5.00th=[ 29], 10.00th=[ 33], 20.00th=[ 38], 01:04:16.936 | 30.00th=[ 44], 40.00th=[ 47], 50.00th=[ 52], 60.00th=[ 56], 01:04:16.936 | 70.00th=[ 61], 80.00th=[ 66], 90.00th=[ 80], 95.00th=[ 91], 01:04:16.936 | 99.00th=[ 108], 99.50th=[ 174], 99.90th=[ 199], 99.95th=[ 199], 01:04:16.936 | 99.99th=[ 199] 01:04:16.936 bw ( KiB/s): min= 768, max= 2064, per=4.94%, avg=1199.60, stdev=296.94, samples=20 01:04:16.936 iops : min= 192, max= 516, avg=299.90, stdev=74.24, samples=20 01:04:16.936 lat (msec) : 10=1.39%, 20=1.72%, 50=44.71%, 100=50.51%, 250=1.66% 01:04:16.936 cpu : usr=43.00%, sys=0.41%, ctx=1370, majf=0, minf=9 01:04:16.936 IO depths : 1=0.6%, 2=1.3%, 4=6.6%, 8=78.2%, 16=13.3%, 32=0.0%, >=64=0.0% 01:04:16.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.936 complete : 0=0.0%, 4=89.3%, 8=6.6%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.936 issued rwts: total=3015,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:16.936 latency : target=0, window=0, percentile=100.00%, depth=16 01:04:16.936 filename1: (groupid=0, jobs=1): err= 0: pid=110424: Mon Dec 9 10:15:22 2024 01:04:16.936 read: IOPS=244, BW=980KiB/s (1003kB/s)(9828KiB/10032msec) 01:04:16.936 slat (usec): min=3, max=8019, avg=18.74, stdev=207.73 01:04:16.936 clat (msec): min=16, max=194, avg=65.18, stdev=22.41 01:04:16.936 lat (msec): min=16, max=194, avg=65.20, stdev=22.40 01:04:16.936 clat percentiles (msec): 01:04:16.936 | 1.00th=[ 17], 5.00th=[ 35], 10.00th=[ 42], 20.00th=[ 50], 01:04:16.936 | 30.00th=[ 55], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 65], 01:04:16.936 | 70.00th=[ 74], 80.00th=[ 85], 90.00th=[ 92], 95.00th=[ 102], 01:04:16.936 | 99.00th=[ 122], 99.50th=[ 194], 99.90th=[ 194], 99.95th=[ 194], 01:04:16.936 | 99.99th=[ 194] 01:04:16.936 bw ( KiB/s): min= 640, max= 1269, per=4.01%, avg=973.32, stdev=175.90, samples=19 01:04:16.936 iops : min= 160, max= 317, avg=243.32, stdev=43.95, samples=19 01:04:16.936 lat (msec) : 20=1.30%, 50=21.08%, 100=72.20%, 250=5.41% 01:04:16.936 cpu : usr=42.95%, sys=0.56%, ctx=1369, majf=0, minf=9 01:04:16.936 IO depths : 1=2.8%, 2=6.7%, 4=18.1%, 8=62.5%, 16=9.9%, 
32=0.0%, >=64=0.0% 01:04:16.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.936 complete : 0=0.0%, 4=92.2%, 8=2.4%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.936 issued rwts: total=2457,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:16.936 latency : target=0, window=0, percentile=100.00%, depth=16 01:04:16.936 filename1: (groupid=0, jobs=1): err= 0: pid=110425: Mon Dec 9 10:15:22 2024 01:04:16.936 read: IOPS=243, BW=974KiB/s (997kB/s)(9752KiB/10016msec) 01:04:16.936 slat (usec): min=3, max=7094, avg=18.81, stdev=204.99 01:04:16.936 clat (msec): min=23, max=214, avg=65.55, stdev=22.73 01:04:16.936 lat (msec): min=23, max=214, avg=65.56, stdev=22.73 01:04:16.936 clat percentiles (msec): 01:04:16.936 | 1.00th=[ 29], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 48], 01:04:16.936 | 30.00th=[ 54], 40.00th=[ 56], 50.00th=[ 61], 60.00th=[ 69], 01:04:16.936 | 70.00th=[ 77], 80.00th=[ 83], 90.00th=[ 91], 95.00th=[ 105], 01:04:16.936 | 99.00th=[ 125], 99.50th=[ 201], 99.90th=[ 201], 99.95th=[ 215], 01:04:16.936 | 99.99th=[ 215] 01:04:16.936 bw ( KiB/s): min= 624, max= 1360, per=3.97%, avg=964.79, stdev=186.91, samples=19 01:04:16.936 iops : min= 156, max= 340, avg=241.16, stdev=46.71, samples=19 01:04:16.936 lat (msec) : 50=23.42%, 100=69.98%, 250=6.60% 01:04:16.936 cpu : usr=39.80%, sys=0.44%, ctx=1293, majf=0, minf=9 01:04:16.936 IO depths : 1=1.9%, 2=4.6%, 4=13.5%, 8=68.8%, 16=11.2%, 32=0.0%, >=64=0.0% 01:04:16.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.936 complete : 0=0.0%, 4=91.1%, 8=3.7%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.936 issued rwts: total=2438,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:16.936 latency : target=0, window=0, percentile=100.00%, depth=16 01:04:16.936 filename1: (groupid=0, jobs=1): err= 0: pid=110426: Mon Dec 9 10:15:22 2024 01:04:16.936 read: IOPS=242, BW=970KiB/s (993kB/s)(9708KiB/10009msec) 01:04:16.936 slat (usec): min=3, max=8041, avg=19.96, stdev=244.49 01:04:16.936 clat (msec): min=24, max=209, avg=65.80, stdev=23.11 01:04:16.936 lat (msec): min=24, max=209, avg=65.82, stdev=23.11 01:04:16.936 clat percentiles (msec): 01:04:16.936 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 49], 01:04:16.936 | 30.00th=[ 56], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 67], 01:04:16.936 | 70.00th=[ 72], 80.00th=[ 84], 90.00th=[ 94], 95.00th=[ 103], 01:04:16.936 | 99.00th=[ 130], 99.50th=[ 211], 99.90th=[ 211], 99.95th=[ 211], 01:04:16.936 | 99.99th=[ 211] 01:04:16.936 bw ( KiB/s): min= 512, max= 1384, per=3.96%, avg=961.26, stdev=217.79, samples=19 01:04:16.936 iops : min= 128, max= 346, avg=240.32, stdev=54.45, samples=19 01:04:16.936 lat (msec) : 50=22.29%, 100=72.23%, 250=5.48% 01:04:16.936 cpu : usr=42.94%, sys=0.53%, ctx=1290, majf=0, minf=9 01:04:16.936 IO depths : 1=3.0%, 2=6.7%, 4=16.9%, 8=63.5%, 16=9.9%, 32=0.0%, >=64=0.0% 01:04:16.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.936 complete : 0=0.0%, 4=91.9%, 8=2.8%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.936 issued rwts: total=2427,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:16.936 latency : target=0, window=0, percentile=100.00%, depth=16 01:04:16.936 filename1: (groupid=0, jobs=1): err= 0: pid=110427: Mon Dec 9 10:15:22 2024 01:04:16.936 read: IOPS=241, BW=965KiB/s (989kB/s)(9660KiB/10006msec) 01:04:16.936 slat (usec): min=2, max=10046, avg=23.87, stdev=265.45 01:04:16.936 clat (msec): min=9, max=230, avg=66.11, stdev=21.91 01:04:16.936 lat (msec): min=9, max=230, avg=66.13, stdev=21.92 
01:04:16.936 clat percentiles (msec): 01:04:16.936 | 1.00th=[ 33], 5.00th=[ 40], 10.00th=[ 45], 20.00th=[ 52], 01:04:16.936 | 30.00th=[ 55], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 67], 01:04:16.936 | 70.00th=[ 74], 80.00th=[ 82], 90.00th=[ 90], 95.00th=[ 107], 01:04:16.936 | 99.00th=[ 122], 99.50th=[ 201], 99.90th=[ 201], 99.95th=[ 232], 01:04:16.936 | 99.99th=[ 232] 01:04:16.936 bw ( KiB/s): min= 624, max= 1224, per=3.94%, avg=955.42, stdev=170.69, samples=19 01:04:16.936 iops : min= 156, max= 306, avg=238.84, stdev=42.68, samples=19 01:04:16.936 lat (msec) : 10=0.37%, 20=0.29%, 50=17.10%, 100=75.57%, 250=6.67% 01:04:16.936 cpu : usr=43.93%, sys=0.50%, ctx=1302, majf=0, minf=10 01:04:16.936 IO depths : 1=2.1%, 2=4.6%, 4=13.3%, 8=68.9%, 16=11.2%, 32=0.0%, >=64=0.0% 01:04:16.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.936 complete : 0=0.0%, 4=91.0%, 8=4.1%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.936 issued rwts: total=2415,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:16.936 latency : target=0, window=0, percentile=100.00%, depth=16 01:04:16.936 filename1: (groupid=0, jobs=1): err= 0: pid=110428: Mon Dec 9 10:15:22 2024 01:04:16.936 read: IOPS=268, BW=1073KiB/s (1099kB/s)(10.5MiB/10007msec) 01:04:16.936 slat (usec): min=2, max=8067, avg=24.57, stdev=318.99 01:04:16.936 clat (msec): min=10, max=176, avg=59.50, stdev=20.44 01:04:16.936 lat (msec): min=10, max=176, avg=59.53, stdev=20.44 01:04:16.936 clat percentiles (msec): 01:04:16.936 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 46], 01:04:16.936 | 30.00th=[ 48], 40.00th=[ 53], 50.00th=[ 59], 60.00th=[ 61], 01:04:16.936 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 86], 95.00th=[ 96], 01:04:16.936 | 99.00th=[ 125], 99.50th=[ 163], 99.90th=[ 163], 99.95th=[ 178], 01:04:16.936 | 99.99th=[ 178] 01:04:16.936 bw ( KiB/s): min= 640, max= 1280, per=4.34%, avg=1054.00, stdev=168.38, samples=19 01:04:16.936 iops : min= 160, max= 320, avg=263.47, stdev=42.08, samples=19 01:04:16.936 lat (msec) : 20=0.60%, 50=35.38%, 100=61.12%, 250=2.91% 01:04:16.936 cpu : usr=35.26%, sys=0.34%, ctx=979, majf=0, minf=9 01:04:16.936 IO depths : 1=0.6%, 2=1.6%, 4=8.5%, 8=76.1%, 16=13.3%, 32=0.0%, >=64=0.0% 01:04:16.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.936 complete : 0=0.0%, 4=89.6%, 8=6.1%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.936 issued rwts: total=2685,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:16.936 latency : target=0, window=0, percentile=100.00%, depth=16 01:04:16.936 filename2: (groupid=0, jobs=1): err= 0: pid=110429: Mon Dec 9 10:15:22 2024 01:04:16.936 read: IOPS=227, BW=910KiB/s (932kB/s)(9108KiB/10010msec) 01:04:16.936 slat (usec): min=2, max=8039, avg=26.86, stdev=335.95 01:04:16.936 clat (msec): min=10, max=216, avg=70.14, stdev=23.91 01:04:16.936 lat (msec): min=10, max=216, avg=70.16, stdev=23.91 01:04:16.936 clat percentiles (msec): 01:04:16.936 | 1.00th=[ 34], 5.00th=[ 38], 10.00th=[ 47], 20.00th=[ 49], 01:04:16.936 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 72], 01:04:16.936 | 70.00th=[ 83], 80.00th=[ 86], 90.00th=[ 96], 95.00th=[ 108], 01:04:16.936 | 99.00th=[ 132], 99.50th=[ 203], 99.90th=[ 203], 99.95th=[ 215], 01:04:16.936 | 99.99th=[ 215] 01:04:16.936 bw ( KiB/s): min= 512, max= 1152, per=3.68%, avg=893.47, stdev=158.53, samples=19 01:04:16.936 iops : min= 128, max= 288, avg=223.37, stdev=39.63, samples=19 01:04:16.936 lat (msec) : 20=0.48%, 50=21.08%, 100=69.61%, 250=8.83% 01:04:16.936 cpu : usr=32.93%, sys=0.30%, ctx=869, 
majf=0, minf=9 01:04:16.936 IO depths : 1=1.8%, 2=4.4%, 4=13.9%, 8=68.3%, 16=11.6%, 32=0.0%, >=64=0.0% 01:04:16.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.936 complete : 0=0.0%, 4=91.0%, 8=4.2%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.936 issued rwts: total=2277,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:16.936 latency : target=0, window=0, percentile=100.00%, depth=16 01:04:16.936 filename2: (groupid=0, jobs=1): err= 0: pid=110430: Mon Dec 9 10:15:22 2024 01:04:16.936 read: IOPS=256, BW=1027KiB/s (1051kB/s)(10.0MiB/10013msec) 01:04:16.936 slat (usec): min=3, max=8007, avg=16.79, stdev=176.57 01:04:16.936 clat (msec): min=17, max=203, avg=62.20, stdev=19.46 01:04:16.936 lat (msec): min=17, max=203, avg=62.22, stdev=19.46 01:04:16.936 clat percentiles (msec): 01:04:16.936 | 1.00th=[ 28], 5.00th=[ 38], 10.00th=[ 44], 20.00th=[ 48], 01:04:16.936 | 30.00th=[ 53], 40.00th=[ 57], 50.00th=[ 60], 60.00th=[ 63], 01:04:16.936 | 70.00th=[ 67], 80.00th=[ 75], 90.00th=[ 85], 95.00th=[ 92], 01:04:16.936 | 99.00th=[ 121], 99.50th=[ 171], 99.90th=[ 203], 99.95th=[ 203], 01:04:16.936 | 99.99th=[ 203] 01:04:16.936 bw ( KiB/s): min= 592, max= 1152, per=4.20%, avg=1020.11, stdev=138.55, samples=19 01:04:16.936 iops : min= 148, max= 288, avg=255.00, stdev=34.61, samples=19 01:04:16.936 lat (msec) : 20=0.39%, 50=25.80%, 100=71.25%, 250=2.57% 01:04:16.936 cpu : usr=41.55%, sys=0.52%, ctx=1233, majf=0, minf=9 01:04:16.936 IO depths : 1=1.8%, 2=4.2%, 4=12.8%, 8=69.5%, 16=11.7%, 32=0.0%, >=64=0.0% 01:04:16.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.936 complete : 0=0.0%, 4=90.9%, 8=4.4%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.936 issued rwts: total=2570,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:16.937 latency : target=0, window=0, percentile=100.00%, depth=16 01:04:16.937 filename2: (groupid=0, jobs=1): err= 0: pid=110431: Mon Dec 9 10:15:22 2024 01:04:16.937 read: IOPS=260, BW=1042KiB/s (1067kB/s)(10.2MiB/10011msec) 01:04:16.937 slat (usec): min=5, max=8030, avg=24.79, stdev=293.71 01:04:16.937 clat (msec): min=11, max=229, avg=61.25, stdev=21.43 01:04:16.937 lat (msec): min=11, max=229, avg=61.28, stdev=21.44 01:04:16.937 clat percentiles (msec): 01:04:16.937 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 45], 01:04:16.937 | 30.00th=[ 52], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 64], 01:04:16.937 | 70.00th=[ 69], 80.00th=[ 77], 90.00th=[ 86], 95.00th=[ 94], 01:04:16.937 | 99.00th=[ 118], 99.50th=[ 207], 99.90th=[ 230], 99.95th=[ 230], 01:04:16.937 | 99.99th=[ 230] 01:04:16.937 bw ( KiB/s): min= 512, max= 1472, per=4.30%, avg=1043.84, stdev=200.02, samples=19 01:04:16.937 iops : min= 128, max= 368, avg=260.95, stdev=50.00, samples=19 01:04:16.937 lat (msec) : 20=0.61%, 50=27.30%, 100=69.52%, 250=2.57% 01:04:16.937 cpu : usr=44.83%, sys=0.56%, ctx=1295, majf=0, minf=9 01:04:16.937 IO depths : 1=1.9%, 2=4.3%, 4=13.5%, 8=69.0%, 16=11.3%, 32=0.0%, >=64=0.0% 01:04:16.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.937 complete : 0=0.0%, 4=90.9%, 8=4.1%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.937 issued rwts: total=2608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:16.937 latency : target=0, window=0, percentile=100.00%, depth=16 01:04:16.937 filename2: (groupid=0, jobs=1): err= 0: pid=110432: Mon Dec 9 10:15:22 2024 01:04:16.937 read: IOPS=230, BW=922KiB/s (944kB/s)(9252KiB/10035msec) 01:04:16.937 slat (usec): min=5, max=8043, avg=27.78, stdev=333.34 01:04:16.937 clat (msec): 
min=24, max=216, avg=69.20, stdev=23.75 01:04:16.937 lat (msec): min=24, max=216, avg=69.22, stdev=23.76 01:04:16.937 clat percentiles (msec): 01:04:16.937 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 51], 01:04:16.937 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 69], 60.00th=[ 72], 01:04:16.937 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 01:04:16.937 | 99.00th=[ 144], 99.50th=[ 192], 99.90th=[ 218], 99.95th=[ 218], 01:04:16.937 | 99.99th=[ 218] 01:04:16.937 bw ( KiB/s): min= 560, max= 1152, per=3.78%, avg=918.80, stdev=154.35, samples=20 01:04:16.937 iops : min= 140, max= 288, avg=229.70, stdev=38.59, samples=20 01:04:16.937 lat (msec) : 50=19.76%, 100=71.81%, 250=8.43% 01:04:16.937 cpu : usr=33.18%, sys=0.32%, ctx=899, majf=0, minf=9 01:04:16.937 IO depths : 1=1.7%, 2=4.1%, 4=13.3%, 8=69.3%, 16=11.6%, 32=0.0%, >=64=0.0% 01:04:16.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.937 complete : 0=0.0%, 4=90.6%, 8=4.6%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.937 issued rwts: total=2313,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:16.937 latency : target=0, window=0, percentile=100.00%, depth=16 01:04:16.937 filename2: (groupid=0, jobs=1): err= 0: pid=110433: Mon Dec 9 10:15:22 2024 01:04:16.937 read: IOPS=231, BW=925KiB/s (947kB/s)(9264KiB/10013msec) 01:04:16.937 slat (usec): min=3, max=7026, avg=20.96, stdev=205.40 01:04:16.937 clat (msec): min=30, max=226, avg=69.00, stdev=22.62 01:04:16.937 lat (msec): min=30, max=226, avg=69.03, stdev=22.61 01:04:16.937 clat percentiles (msec): 01:04:16.937 | 1.00th=[ 33], 5.00th=[ 37], 10.00th=[ 48], 20.00th=[ 54], 01:04:16.937 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 72], 01:04:16.937 | 70.00th=[ 77], 80.00th=[ 84], 90.00th=[ 94], 95.00th=[ 108], 01:04:16.937 | 99.00th=[ 146], 99.50th=[ 192], 99.90th=[ 228], 99.95th=[ 228], 01:04:16.937 | 99.99th=[ 228] 01:04:16.937 bw ( KiB/s): min= 640, max= 1128, per=3.75%, avg=909.42, stdev=149.94, samples=19 01:04:16.937 iops : min= 160, max= 282, avg=227.32, stdev=37.46, samples=19 01:04:16.937 lat (msec) : 50=15.50%, 100=78.28%, 250=6.22% 01:04:16.937 cpu : usr=36.10%, sys=0.49%, ctx=1045, majf=0, minf=9 01:04:16.937 IO depths : 1=2.6%, 2=5.8%, 4=16.5%, 8=64.9%, 16=10.2%, 32=0.0%, >=64=0.0% 01:04:16.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.937 complete : 0=0.0%, 4=91.5%, 8=3.1%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.937 issued rwts: total=2316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:16.937 latency : target=0, window=0, percentile=100.00%, depth=16 01:04:16.937 filename2: (groupid=0, jobs=1): err= 0: pid=110434: Mon Dec 9 10:15:22 2024 01:04:16.937 read: IOPS=284, BW=1139KiB/s (1167kB/s)(11.2MiB/10031msec) 01:04:16.937 slat (usec): min=5, max=5013, avg=15.05, stdev=120.09 01:04:16.937 clat (msec): min=10, max=205, avg=56.05, stdev=22.86 01:04:16.937 lat (msec): min=10, max=205, avg=56.06, stdev=22.86 01:04:16.937 clat percentiles (msec): 01:04:16.937 | 1.00th=[ 13], 5.00th=[ 30], 10.00th=[ 34], 20.00th=[ 39], 01:04:16.937 | 30.00th=[ 44], 40.00th=[ 49], 50.00th=[ 55], 60.00th=[ 58], 01:04:16.937 | 70.00th=[ 62], 80.00th=[ 70], 90.00th=[ 84], 95.00th=[ 95], 01:04:16.937 | 99.00th=[ 117], 99.50th=[ 207], 99.90th=[ 207], 99.95th=[ 207], 01:04:16.937 | 99.99th=[ 207] 01:04:16.937 bw ( KiB/s): min= 640, max= 1936, per=4.68%, avg=1136.40, stdev=300.79, samples=20 01:04:16.937 iops : min= 160, max= 484, avg=284.10, stdev=75.20, samples=20 01:04:16.937 lat (msec) : 20=3.15%, 50=38.47%, 
100=54.85%, 250=3.54% 01:04:16.937 cpu : usr=43.10%, sys=0.51%, ctx=1503, majf=0, minf=9 01:04:16.937 IO depths : 1=0.7%, 2=1.4%, 4=7.8%, 8=76.8%, 16=13.3%, 32=0.0%, >=64=0.0% 01:04:16.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.937 complete : 0=0.0%, 4=89.8%, 8=6.0%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.937 issued rwts: total=2857,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:16.937 latency : target=0, window=0, percentile=100.00%, depth=16 01:04:16.937 filename2: (groupid=0, jobs=1): err= 0: pid=110435: Mon Dec 9 10:15:22 2024 01:04:16.937 read: IOPS=253, BW=1012KiB/s (1037kB/s)(9.89MiB/10008msec) 01:04:16.937 slat (usec): min=2, max=8034, avg=25.02, stdev=309.66 01:04:16.937 clat (msec): min=17, max=209, avg=63.09, stdev=22.85 01:04:16.937 lat (msec): min=17, max=209, avg=63.12, stdev=22.85 01:04:16.937 clat percentiles (msec): 01:04:16.937 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 47], 01:04:16.937 | 30.00th=[ 50], 40.00th=[ 57], 50.00th=[ 60], 60.00th=[ 62], 01:04:16.937 | 70.00th=[ 71], 80.00th=[ 83], 90.00th=[ 93], 95.00th=[ 100], 01:04:16.937 | 99.00th=[ 131], 99.50th=[ 201], 99.90th=[ 209], 99.95th=[ 209], 01:04:16.937 | 99.99th=[ 209] 01:04:16.937 bw ( KiB/s): min= 680, max= 1376, per=4.11%, avg=996.63, stdev=180.69, samples=19 01:04:16.937 iops : min= 170, max= 344, avg=249.16, stdev=45.17, samples=19 01:04:16.937 lat (msec) : 20=0.39%, 50=29.85%, 100=64.82%, 250=4.93% 01:04:16.937 cpu : usr=35.64%, sys=0.33%, ctx=930, majf=0, minf=9 01:04:16.937 IO depths : 1=1.0%, 2=2.3%, 4=8.9%, 8=74.8%, 16=13.0%, 32=0.0%, >=64=0.0% 01:04:16.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.937 complete : 0=0.0%, 4=89.8%, 8=6.0%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.937 issued rwts: total=2533,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:16.937 latency : target=0, window=0, percentile=100.00%, depth=16 01:04:16.937 filename2: (groupid=0, jobs=1): err= 0: pid=110436: Mon Dec 9 10:15:22 2024 01:04:16.937 read: IOPS=260, BW=1043KiB/s (1068kB/s)(10.2MiB/10030msec) 01:04:16.937 slat (usec): min=5, max=4021, avg=14.60, stdev=79.21 01:04:16.937 clat (msec): min=8, max=202, avg=61.18, stdev=22.73 01:04:16.937 lat (msec): min=8, max=202, avg=61.20, stdev=22.72 01:04:16.937 clat percentiles (msec): 01:04:16.937 | 1.00th=[ 10], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 48], 01:04:16.937 | 30.00th=[ 49], 40.00th=[ 57], 50.00th=[ 60], 60.00th=[ 61], 01:04:16.937 | 70.00th=[ 72], 80.00th=[ 75], 90.00th=[ 86], 95.00th=[ 96], 01:04:16.937 | 99.00th=[ 121], 99.50th=[ 171], 99.90th=[ 203], 99.95th=[ 203], 01:04:16.937 | 99.99th=[ 203] 01:04:16.937 bw ( KiB/s): min= 720, max= 1664, per=4.28%, avg=1039.60, stdev=220.22, samples=20 01:04:16.937 iops : min= 180, max= 416, avg=259.90, stdev=55.05, samples=20 01:04:16.937 lat (msec) : 10=1.22%, 20=1.84%, 50=31.74%, 100=60.69%, 250=4.51% 01:04:16.937 cpu : usr=33.19%, sys=0.25%, ctx=872, majf=0, minf=9 01:04:16.937 IO depths : 1=1.6%, 2=3.5%, 4=11.2%, 8=71.9%, 16=11.7%, 32=0.0%, >=64=0.0% 01:04:16.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.937 complete : 0=0.0%, 4=90.3%, 8=4.9%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:16.937 issued rwts: total=2615,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:16.937 latency : target=0, window=0, percentile=100.00%, depth=16 01:04:16.937 01:04:16.937 Run status group 0 (all jobs): 01:04:16.937 READ: bw=23.7MiB/s (24.8MB/s), 910KiB/s-1200KiB/s (932kB/s-1229kB/s), io=238MiB (250MB), 
run=10006-10046msec 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:04:16.937 bdev_null0 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:04:16.937 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:04:16.938 [2024-12-09 10:15:22.576040] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:04:16.938 bdev_null1 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:04:16.938 { 01:04:16.938 "params": { 01:04:16.938 "name": "Nvme$subsystem", 01:04:16.938 "trtype": "$TEST_TRANSPORT", 01:04:16.938 "traddr": "$NVMF_FIRST_TARGET_IP", 01:04:16.938 "adrfam": "ipv4", 01:04:16.938 "trsvcid": "$NVMF_PORT", 01:04:16.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:04:16.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:04:16.938 "hdgst": ${hdgst:-false}, 01:04:16.938 "ddgst": ${ddgst:-false} 01:04:16.938 }, 01:04:16.938 "method": "bdev_nvme_attach_controller" 01:04:16.938 } 01:04:16.938 EOF 01:04:16.938 )") 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:04:16.938 10:15:22 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:04:16.938 { 01:04:16.938 "params": { 01:04:16.938 "name": "Nvme$subsystem", 01:04:16.938 "trtype": "$TEST_TRANSPORT", 01:04:16.938 "traddr": "$NVMF_FIRST_TARGET_IP", 01:04:16.938 "adrfam": "ipv4", 01:04:16.938 "trsvcid": "$NVMF_PORT", 01:04:16.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:04:16.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:04:16.938 "hdgst": ${hdgst:-false}, 01:04:16.938 "ddgst": ${ddgst:-false} 01:04:16.938 }, 01:04:16.938 "method": "bdev_nvme_attach_controller" 01:04:16.938 } 01:04:16.938 EOF 01:04:16.938 )") 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
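The xtrace records above are gen_nvmf_target_json assembling the JSON that fio's spdk_bdev engine will consume: one bdev_nvme_attach_controller fragment per subsystem is appended to a bash array via a heredoc, and the joined result is validated with jq. A rough standalone sketch of that pattern follows; it is a simplification, not the verbatim nvmf/common.sh helper (the real function parameterizes the values through $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT, and wraps the fragments in a fuller SPDK "subsystems" document):

# Sketch only; assumes two subsystems with the addresses seen in this run.
config=()
for subsystem in 0 1; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
# Join the fragments with commas (first char of IFS) and pretty-print/validate.
(IFS=,; printf '[%s]\n' "${config[*]}") | jq .

The joined '{...},{...}' string is exactly what the printf trace below emits.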
01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=,
01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{
01:04:16.938 "params": {
01:04:16.938 "name": "Nvme0",
01:04:16.938 "trtype": "tcp",
01:04:16.938 "traddr": "10.0.0.3",
01:04:16.938 "adrfam": "ipv4",
01:04:16.938 "trsvcid": "4420",
01:04:16.938 "subnqn": "nqn.2016-06.io.spdk:cnode0",
01:04:16.938 "hostnqn": "nqn.2016-06.io.spdk:host0",
01:04:16.938 "hdgst": false,
01:04:16.938 "ddgst": false
01:04:16.938 },
01:04:16.938 "method": "bdev_nvme_attach_controller"
01:04:16.938 },{
01:04:16.938 "params": {
01:04:16.938 "name": "Nvme1",
01:04:16.938 "trtype": "tcp",
01:04:16.938 "traddr": "10.0.0.3",
01:04:16.938 "adrfam": "ipv4",
01:04:16.938 "trsvcid": "4420",
01:04:16.938 "subnqn": "nqn.2016-06.io.spdk:cnode1",
01:04:16.938 "hostnqn": "nqn.2016-06.io.spdk:host1",
01:04:16.938 "hdgst": false,
01:04:16.938 "ddgst": false
01:04:16.938 },
01:04:16.938 "method": "bdev_nvme_attach_controller"
01:04:16.938 }'
01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
01:04:16.938 10:15:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
01:04:16.938 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
01:04:16.938 ...
01:04:16.938 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
01:04:16.938 ...
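The fio job file itself is generated by gen_fio_conf and piped in over /dev/fd/61, so it is never echoed into the log. The following is a hypothetical reconstruction from the banner lines above and the parameters the test set earlier (randread, bs=8k,16k,128k, iodepth=8, numjobs=2, runtime=5); the section and bdev names (filename0/filename1, Nvme0n1/Nvme1n1) are assumptions based on SPDK's usual controller-to-bdev naming:

[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
time_based=1
runtime=5

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1

Two job sections at numjobs=2 would account for the four threads fio reports next.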
01:04:16.938 fio-3.35 01:04:16.938 Starting 4 threads 01:04:22.252 01:04:22.252 filename0: (groupid=0, jobs=1): err= 0: pid=110568: Mon Dec 9 10:15:28 2024 01:04:22.252 read: IOPS=2469, BW=19.3MiB/s (20.2MB/s)(96.5MiB/5001msec) 01:04:22.252 slat (nsec): min=5580, max=62934, avg=9409.78, stdev=6450.77 01:04:22.252 clat (usec): min=503, max=4980, avg=3191.09, stdev=281.85 01:04:22.252 lat (usec): min=522, max=4986, avg=3200.50, stdev=281.45 01:04:22.252 clat percentiles (usec): 01:04:22.252 | 1.00th=[ 2376], 5.00th=[ 2966], 10.00th=[ 2999], 20.00th=[ 3032], 01:04:22.252 | 30.00th=[ 3097], 40.00th=[ 3130], 50.00th=[ 3163], 60.00th=[ 3195], 01:04:22.252 | 70.00th=[ 3261], 80.00th=[ 3326], 90.00th=[ 3490], 95.00th=[ 3621], 01:04:22.252 | 99.00th=[ 3884], 99.50th=[ 4047], 99.90th=[ 4555], 99.95th=[ 4686], 01:04:22.252 | 99.99th=[ 4883] 01:04:22.252 bw ( KiB/s): min=18304, max=20566, per=25.00%, avg=19693.11, stdev=621.03, samples=9 01:04:22.252 iops : min= 2288, max= 2570, avg=2461.56, stdev=77.50, samples=9 01:04:22.252 lat (usec) : 750=0.02%, 1000=0.21% 01:04:22.252 lat (msec) : 2=0.55%, 4=98.66%, 10=0.57% 01:04:22.252 cpu : usr=96.34%, sys=2.66%, ctx=8, majf=0, minf=0 01:04:22.252 IO depths : 1=10.6%, 2=24.8%, 4=50.2%, 8=14.4%, 16=0.0%, 32=0.0%, >=64=0.0% 01:04:22.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:22.252 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:22.252 issued rwts: total=12352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:22.252 latency : target=0, window=0, percentile=100.00%, depth=8 01:04:22.252 filename0: (groupid=0, jobs=1): err= 0: pid=110569: Mon Dec 9 10:15:28 2024 01:04:22.252 read: IOPS=2459, BW=19.2MiB/s (20.1MB/s)(96.1MiB/5002msec) 01:04:22.252 slat (nsec): min=5683, max=75892, avg=15316.15, stdev=10544.49 01:04:22.252 clat (usec): min=1611, max=4939, avg=3186.83, stdev=239.29 01:04:22.252 lat (usec): min=1617, max=4946, avg=3202.15, stdev=238.14 01:04:22.252 clat percentiles (usec): 01:04:22.252 | 1.00th=[ 2507], 5.00th=[ 2933], 10.00th=[ 2966], 20.00th=[ 3032], 01:04:22.252 | 30.00th=[ 3064], 40.00th=[ 3097], 50.00th=[ 3163], 60.00th=[ 3195], 01:04:22.252 | 70.00th=[ 3261], 80.00th=[ 3326], 90.00th=[ 3458], 95.00th=[ 3654], 01:04:22.252 | 99.00th=[ 3916], 99.50th=[ 3982], 99.90th=[ 4686], 99.95th=[ 4817], 01:04:22.252 | 99.99th=[ 4883] 01:04:22.252 bw ( KiB/s): min=18432, max=20096, per=24.87%, avg=19588.33, stdev=484.65, samples=9 01:04:22.252 iops : min= 2304, max= 2512, avg=2448.44, stdev=60.55, samples=9 01:04:22.252 lat (msec) : 2=0.07%, 4=99.52%, 10=0.41% 01:04:22.252 cpu : usr=96.68%, sys=2.42%, ctx=7, majf=0, minf=0 01:04:22.252 IO depths : 1=10.1%, 2=25.0%, 4=50.0%, 8=14.9%, 16=0.0%, 32=0.0%, >=64=0.0% 01:04:22.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:22.252 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:22.252 issued rwts: total=12304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:22.252 latency : target=0, window=0, percentile=100.00%, depth=8 01:04:22.252 filename1: (groupid=0, jobs=1): err= 0: pid=110570: Mon Dec 9 10:15:28 2024 01:04:22.252 read: IOPS=2460, BW=19.2MiB/s (20.2MB/s)(96.1MiB/5001msec) 01:04:22.252 slat (nsec): min=5618, max=72356, avg=16672.37, stdev=13070.90 01:04:22.252 clat (usec): min=1185, max=4884, avg=3172.49, stdev=230.27 01:04:22.252 lat (usec): min=1211, max=4891, avg=3189.17, stdev=230.31 01:04:22.252 clat percentiles (usec): 01:04:22.252 | 1.00th=[ 2704], 5.00th=[ 2900], 10.00th=[ 2966], 
20.00th=[ 2999], 01:04:22.252 | 30.00th=[ 3064], 40.00th=[ 3097], 50.00th=[ 3130], 60.00th=[ 3163], 01:04:22.252 | 70.00th=[ 3228], 80.00th=[ 3326], 90.00th=[ 3458], 95.00th=[ 3621], 01:04:22.252 | 99.00th=[ 3884], 99.50th=[ 3982], 99.90th=[ 4293], 99.95th=[ 4359], 01:04:22.252 | 99.99th=[ 4817] 01:04:22.252 bw ( KiB/s): min=18320, max=19984, per=24.88%, avg=19596.44, stdev=516.47, samples=9 01:04:22.252 iops : min= 2290, max= 2498, avg=2449.56, stdev=64.56, samples=9 01:04:22.252 lat (msec) : 2=0.07%, 4=99.59%, 10=0.35% 01:04:22.252 cpu : usr=95.68%, sys=3.00%, ctx=7, majf=0, minf=0 01:04:22.252 IO depths : 1=2.4%, 2=25.0%, 4=50.0%, 8=22.6%, 16=0.0%, 32=0.0%, >=64=0.0% 01:04:22.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:22.252 complete : 0=0.0%, 4=89.8%, 8=10.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:22.252 issued rwts: total=12303,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:22.252 latency : target=0, window=0, percentile=100.00%, depth=8 01:04:22.252 filename1: (groupid=0, jobs=1): err= 0: pid=110571: Mon Dec 9 10:15:28 2024 01:04:22.252 read: IOPS=2458, BW=19.2MiB/s (20.1MB/s)(96.1MiB/5002msec) 01:04:22.252 slat (nsec): min=5668, max=75750, avg=13371.15, stdev=10240.87 01:04:22.252 clat (usec): min=1630, max=6251, avg=3195.59, stdev=273.41 01:04:22.252 lat (usec): min=1650, max=6258, avg=3208.97, stdev=272.30 01:04:22.252 clat percentiles (usec): 01:04:22.252 | 1.00th=[ 2474], 5.00th=[ 2933], 10.00th=[ 2999], 20.00th=[ 3032], 01:04:22.252 | 30.00th=[ 3064], 40.00th=[ 3130], 50.00th=[ 3163], 60.00th=[ 3195], 01:04:22.252 | 70.00th=[ 3261], 80.00th=[ 3326], 90.00th=[ 3490], 95.00th=[ 3654], 01:04:22.252 | 99.00th=[ 3949], 99.50th=[ 4178], 99.90th=[ 5669], 99.95th=[ 5735], 01:04:22.252 | 99.99th=[ 6063] 01:04:22.252 bw ( KiB/s): min=18304, max=20096, per=24.86%, avg=19584.00, stdev=523.86, samples=9 01:04:22.252 iops : min= 2288, max= 2512, avg=2448.00, stdev=65.48, samples=9 01:04:22.252 lat (msec) : 2=0.34%, 4=98.88%, 10=0.78% 01:04:22.252 cpu : usr=96.50%, sys=2.46%, ctx=49, majf=0, minf=0 01:04:22.252 IO depths : 1=9.9%, 2=22.8%, 4=52.2%, 8=15.1%, 16=0.0%, 32=0.0%, >=64=0.0% 01:04:22.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:22.252 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:22.252 issued rwts: total=12296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:22.252 latency : target=0, window=0, percentile=100.00%, depth=8 01:04:22.252 01:04:22.252 Run status group 0 (all jobs): 01:04:22.252 READ: bw=76.9MiB/s (80.7MB/s), 19.2MiB/s-19.3MiB/s (20.1MB/s-20.2MB/s), io=385MiB (403MB), run=5001-5002msec 01:04:22.252 10:15:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 01:04:22.252 10:15:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 01:04:22.252 10:15:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:04:22.252 10:15:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 01:04:22.252 10:15:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 01:04:22.252 10:15:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:04:22.252 10:15:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:22.252 10:15:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:04:22.252 10:15:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
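A quick arithmetic cross-check of the group summary above: each of the four jobs sustained roughly 19.2-19.3 MiB/s (about 2460 IOPS of 8 KiB reads, and each job's per-share is close to 25%), so 4 x 19.2 MiB/s matches the aggregate READ bandwidth of 76.9 MiB/s, and 76.9 MiB/s x ~5.0 s of runtime matches the reported 385 MiB of total IO.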
01:04:22.252 10:15:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
01:04:22.252 10:15:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
01:04:22.252 10:15:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
01:04:22.252 10:15:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:04:22.252 10:15:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
01:04:22.252 10:15:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1
01:04:22.252 10:15:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1
01:04:22.252 10:15:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
01:04:22.252 10:15:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
01:04:22.252 10:15:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
01:04:22.252 10:15:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:04:22.252 10:15:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
01:04:22.252 10:15:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
01:04:22.252 10:15:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
01:04:22.252 ************************************
01:04:22.252 END TEST fio_dif_rand_params
01:04:22.252 ************************************
01:04:22.252 10:15:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:04:22.252
01:04:22.252 real 0m24.217s
01:04:22.252 user 2m9.307s
01:04:22.252 sys 0m3.293s
01:04:22.252 10:15:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable
01:04:22.252 10:15:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
01:04:22.252 10:15:28 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest
01:04:22.252 10:15:28 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
01:04:22.252 10:15:28 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable
01:04:22.252 10:15:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
01:04:22.252 ************************************
01:04:22.252 START TEST fio_dif_digest
01:04:22.252 ************************************
01:04:22.252 10:15:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest
01:04:22.252 10:15:28 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF
01:04:22.252 10:15:28 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files
01:04:22.252 10:15:28 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst
01:04:22.252 10:15:28 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3
01:04:22.252 10:15:28 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k
01:04:22.252 10:15:28 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3
01:04:22.252 10:15:28 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3
01:04:22.252 10:15:28 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10
01:04:22.252 10:15:28 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true
01:04:22.252 10:15:28 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true
01:04:22.252 10:15:28 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0
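Note the knobs for this pass: NULL_DIF=3 asks for a null bdev carrying a 16-byte metadata region with DIF type 3 protection, while hdgst=true and ddgst=true are passed through to bdev_nvme_attach_controller so the NVMe/TCP initiator negotiates header and data digests (CRC32C checksums over each PDU header and payload). In other words, this test exercises integrity on the wire as well as in the block metadata; the generated attach parameters show up in the JSON printed further below.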
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@"
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
01:04:22.253 bdev_null0
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
01:04:22.253 [2024-12-09 10:15:28.976760] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=()
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
01:04:22.253 {
01:04:22.253 "params": {
01:04:22.253 "name": "Nvme$subsystem",
01:04:22.253 "trtype": "$TEST_TRANSPORT",
01:04:22.253 "traddr": "$NVMF_FIRST_TARGET_IP",
01:04:22.253 "adrfam": "ipv4",
01:04:22.253 "trsvcid": "$NVMF_PORT",
01:04:22.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
01:04:22.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
01:04:22.253 "hdgst": ${hdgst:-false},
01:04:22.253 "ddgst": ${ddgst:-false}
01:04:22.253 },
01:04:22.253 "method": "bdev_nvme_attach_controller"
01:04:22.253 }
01:04:22.253 EOF
01:04:22.253 )")
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf
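For reference, the four rpc_cmd invocations above correspond to these standalone SPDK RPC calls (rpc_cmd is, in effect, the suite's wrapper around scripts/rpc.py; every argument below is taken verbatim from the trace):

scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

That is: create a small (64 MB, 512-byte block) null bdev with 16 bytes of DIF type 3 metadata per block, expose it as a namespace of cnode0, and start an NVMe/TCP listener on 10.0.0.3:4420; the *NOTICE* line above is the target acknowledging that listener.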
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib=
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}'
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq .
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 ))
01:04:22.253 10:15:28 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files ))
01:04:22.253 10:15:29 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=,
01:04:22.253 10:15:29 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{
01:04:22.253 "params": {
01:04:22.253 "name": "Nvme0",
01:04:22.253 "trtype": "tcp",
01:04:22.253 "traddr": "10.0.0.3",
01:04:22.253 "adrfam": "ipv4",
01:04:22.253 "trsvcid": "4420",
01:04:22.253 "subnqn": "nqn.2016-06.io.spdk:cnode0",
01:04:22.253 "hostnqn": "nqn.2016-06.io.spdk:host0",
01:04:22.253 "hdgst": true,
01:04:22.253 "ddgst": true
01:04:22.253 },
01:04:22.253 "method": "bdev_nvme_attach_controller"
01:04:22.253 }'
01:04:22.253 10:15:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=
01:04:22.253 10:15:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
01:04:22.253 10:15:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
01:04:22.253 10:15:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
01:04:22.253 10:15:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
01:04:22.253 10:15:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}'
01:04:22.253 10:15:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=
01:04:22.253 10:15:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
01:04:22.253 10:15:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
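The ldd/grep/awk records above are the harness probing whether the fio plugin was built against a sanitizer. Reduced to a standalone sketch, the logic is:

# If build/fio/spdk_bdev links libasan (or clang's ASan runtime), that library
# must be preloaded ahead of the plugin, or fio would abort at startup with an
# interceptor error. Here ldd finds no sanitizer, so asan_lib stays empty and
# LD_PRELOAD ends up holding only the plugin path (hence the leading space in
# the traced value).
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61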
01:04:22.253 10:15:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:04:22.253 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 01:04:22.253 ... 01:04:22.253 fio-3.35 01:04:22.253 Starting 3 threads 01:04:34.469 01:04:34.469 filename0: (groupid=0, jobs=1): err= 0: pid=110677: Mon Dec 9 10:15:39 2024 01:04:34.469 read: IOPS=299, BW=37.5MiB/s (39.3MB/s)(375MiB/10007msec) 01:04:34.469 slat (nsec): min=5939, max=59922, avg=18183.06, stdev=8574.77 01:04:34.469 clat (usec): min=4489, max=62628, avg=9977.14, stdev=2593.42 01:04:34.469 lat (usec): min=4516, max=62657, avg=9995.32, stdev=2593.47 01:04:34.469 clat percentiles (usec): 01:04:34.469 | 1.00th=[ 8029], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9241], 01:04:34.469 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10028], 01:04:34.469 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10683], 95.00th=[11076], 01:04:34.469 | 99.00th=[13173], 99.50th=[19792], 99.90th=[50594], 99.95th=[51643], 01:04:34.469 | 99.99th=[62653] 01:04:34.469 bw ( KiB/s): min=31232, max=40448, per=38.07%, avg=38387.20, stdev=2045.43, samples=20 01:04:34.469 iops : min= 244, max= 316, avg=299.90, stdev=15.98, samples=20 01:04:34.469 lat (msec) : 10=61.53%, 20=37.97%, 50=0.30%, 100=0.20% 01:04:34.469 cpu : usr=93.46%, sys=4.78%, ctx=79, majf=0, minf=0 01:04:34.469 IO depths : 1=1.8%, 2=98.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:04:34.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:34.469 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:34.469 issued rwts: total=3002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:34.469 latency : target=0, window=0, percentile=100.00%, depth=3 01:04:34.469 filename0: (groupid=0, jobs=1): err= 0: pid=110678: Mon Dec 9 10:15:39 2024 01:04:34.469 read: IOPS=262, BW=32.9MiB/s (34.4MB/s)(329MiB/10003msec) 01:04:34.470 slat (nsec): min=5922, max=85781, avg=17017.28, stdev=7145.03 01:04:34.470 clat (usec): min=3771, max=32369, avg=11393.15, stdev=1523.83 01:04:34.470 lat (usec): min=3787, max=32395, avg=11410.17, stdev=1523.61 01:04:34.470 clat percentiles (usec): 01:04:34.470 | 1.00th=[ 6980], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10552], 01:04:34.470 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 01:04:34.470 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12649], 95.00th=[13042], 01:04:34.470 | 99.00th=[14877], 99.50th=[18482], 99.90th=[29754], 99.95th=[31327], 01:04:34.470 | 99.99th=[32375] 01:04:34.470 bw ( KiB/s): min=31232, max=35840, per=33.34%, avg=33625.60, stdev=1108.98, samples=20 01:04:34.470 iops : min= 244, max= 280, avg=262.70, stdev= 8.66, samples=20 01:04:34.470 lat (msec) : 4=0.04%, 10=6.66%, 20=92.96%, 50=0.34% 01:04:34.470 cpu : usr=93.88%, sys=4.35%, ctx=32, majf=0, minf=0 01:04:34.470 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:04:34.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:34.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:34.470 issued rwts: total=2629,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:34.470 latency : target=0, window=0, percentile=100.00%, depth=3 01:04:34.470 filename0: (groupid=0, jobs=1): err= 0: pid=110679: Mon Dec 9 10:15:39 2024 01:04:34.470 read: IOPS=225, BW=28.2MiB/s (29.5MB/s)(282MiB/10004msec) 01:04:34.470 slat (nsec): 
min=5966, max=77738, avg=21042.41, stdev=7296.88 01:04:34.470 clat (usec): min=5809, max=36168, avg=13292.11, stdev=1628.22 01:04:34.470 lat (usec): min=5826, max=36198, avg=13313.15, stdev=1628.94 01:04:34.470 clat percentiles (usec): 01:04:34.470 | 1.00th=[ 8225], 5.00th=[11994], 10.00th=[12256], 20.00th=[12649], 01:04:34.470 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13173], 60.00th=[13435], 01:04:34.470 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14222], 95.00th=[14615], 01:04:34.470 | 99.00th=[17433], 99.50th=[21627], 99.90th=[35390], 99.95th=[35914], 01:04:34.470 | 99.99th=[35914] 01:04:34.470 bw ( KiB/s): min=25600, max=30976, per=28.57%, avg=28812.80, stdev=1093.95, samples=20 01:04:34.470 iops : min= 200, max= 242, avg=225.10, stdev= 8.55, samples=20 01:04:34.470 lat (msec) : 10=1.91%, 20=97.43%, 50=0.67% 01:04:34.470 cpu : usr=95.34%, sys=3.45%, ctx=152, majf=0, minf=0 01:04:34.470 IO depths : 1=7.1%, 2=92.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:04:34.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:34.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:34.470 issued rwts: total=2253,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:34.470 latency : target=0, window=0, percentile=100.00%, depth=3 01:04:34.470 01:04:34.470 Run status group 0 (all jobs): 01:04:34.470 READ: bw=98.5MiB/s (103MB/s), 28.2MiB/s-37.5MiB/s (29.5MB/s-39.3MB/s), io=986MiB (1033MB), run=10003-10007msec 01:04:34.470 10:15:39 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 01:04:34.470 10:15:39 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 01:04:34.470 10:15:39 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 01:04:34.470 10:15:39 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 01:04:34.470 10:15:39 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 01:04:34.470 10:15:39 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:04:34.470 10:15:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:34.470 10:15:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:04:34.470 10:15:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:34.470 10:15:39 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:04:34.470 10:15:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:34.470 10:15:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:04:34.470 10:15:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:34.470 01:04:34.470 real 0m11.026s 01:04:34.470 user 0m28.914s 01:04:34.470 sys 0m1.589s 01:04:34.470 10:15:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 01:04:34.470 10:15:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:04:34.470 ************************************ 01:04:34.470 END TEST fio_dif_digest 01:04:34.470 ************************************ 01:04:34.470 10:15:40 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 01:04:34.470 10:15:40 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 01:04:34.470 10:15:40 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 01:04:34.470 10:15:40 nvmf_dif -- nvmf/common.sh@121 -- # sync 01:04:34.470 10:15:40 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:04:34.470 10:15:40 nvmf_dif -- 
nvmf/common.sh@124 -- # set +e 01:04:34.470 10:15:40 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 01:04:34.470 10:15:40 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:04:34.470 rmmod nvme_tcp 01:04:34.470 rmmod nvme_fabrics 01:04:34.470 rmmod nvme_keyring 01:04:34.470 10:15:40 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:04:34.470 10:15:40 nvmf_dif -- nvmf/common.sh@128 -- # set -e 01:04:34.470 10:15:40 nvmf_dif -- nvmf/common.sh@129 -- # return 0 01:04:34.470 10:15:40 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 109903 ']' 01:04:34.470 10:15:40 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 109903 01:04:34.470 10:15:40 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 109903 ']' 01:04:34.470 10:15:40 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 109903 01:04:34.470 10:15:40 nvmf_dif -- common/autotest_common.sh@959 -- # uname 01:04:34.470 10:15:40 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:04:34.470 10:15:40 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109903 01:04:34.470 10:15:40 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:04:34.470 10:15:40 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:04:34.470 10:15:40 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109903' 01:04:34.470 killing process with pid 109903 01:04:34.470 10:15:40 nvmf_dif -- common/autotest_common.sh@973 -- # kill 109903 01:04:34.470 10:15:40 nvmf_dif -- common/autotest_common.sh@978 -- # wait 109903 01:04:34.470 10:15:40 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 01:04:34.470 10:15:40 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:04:34.470 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:04:34.470 Waiting for block devices as requested 01:04:34.470 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:04:34.470 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:04:34.470 10:15:41 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:04:34.470 10:15:41 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:04:34.470 10:15:41 nvmf_dif -- nvmf/common.sh@297 -- # iptr 01:04:34.470 10:15:41 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:04:34.470 10:15:41 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 01:04:34.470 10:15:41 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 01:04:34.470 10:15:41 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:04:34.470 10:15:41 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:04:34.470 10:15:41 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:04:34.470 10:15:41 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:04:34.470 10:15:41 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:04:34.470 10:15:41 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:04:34.470 10:15:41 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:04:34.470 10:15:41 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:04:34.470 10:15:41 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:04:34.470 10:15:41 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:04:34.470 10:15:41 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:04:34.470 10:15:41 nvmf_dif -- 
nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:04:34.470 10:15:41 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:04:34.470 10:15:41 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:04:34.470 10:15:41 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:04:34.470 10:15:41 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 01:04:34.470 10:15:41 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:04:34.470 10:15:41 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:04:34.470 10:15:41 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:04:34.730 10:15:41 nvmf_dif -- nvmf/common.sh@300 -- # return 0 01:04:34.730 01:04:34.730 real 1m1.649s 01:04:34.730 user 3m57.437s 01:04:34.730 sys 0m12.257s 01:04:34.730 10:15:41 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 01:04:34.730 10:15:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:04:34.730 ************************************ 01:04:34.730 END TEST nvmf_dif 01:04:34.731 ************************************ 01:04:34.731 10:15:41 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 01:04:34.731 10:15:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:04:34.731 10:15:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:04:34.731 10:15:41 -- common/autotest_common.sh@10 -- # set +x 01:04:34.731 ************************************ 01:04:34.731 START TEST nvmf_abort_qd_sizes 01:04:34.731 ************************************ 01:04:34.731 10:15:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 01:04:34.731 * Looking for test storage... 
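Note on the fio_dif_digest run that closed out nvmf_dif above: fio drove the SPDK null bdevs through the external spdk_bdev ioengine, with the bdev JSON config and job file handed over /dev/fd descriptors. A minimal standalone reproduction, assuming the stock plugin path under build/fio and a throwaway null bdev (job parameters mirror the trace; the config file contents and sizes are illustrative):

cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev",
      "config": [
        { "method": "bdev_null_create",
          "params": { "name": "bdev_null0", "num_blocks": 262144, "block_size": 512 } }
      ] }
  ]
}
EOF
cat > /tmp/digest.fio <<'EOF'
[global]
thread=1
rw=randread
bs=128k
iodepth=3
runtime=10
time_based=1

[filename0]
filename=bdev_null0
numjobs=3
EOF
# The bdev fio plugin is loaded via LD_PRELOAD; filename= names the bdev, not a file.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev.json /tmp/digest.fio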
01:04:34.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:04:34.731 10:15:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:04:34.731 10:15:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 01:04:34.731 10:15:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:04:34.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:34.991 --rc genhtml_branch_coverage=1 01:04:34.991 --rc genhtml_function_coverage=1 01:04:34.991 --rc genhtml_legend=1 01:04:34.991 --rc geninfo_all_blocks=1 01:04:34.991 --rc geninfo_unexecuted_blocks=1 01:04:34.991 01:04:34.991 ' 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:04:34.991 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:34.991 --rc genhtml_branch_coverage=1 01:04:34.991 --rc genhtml_function_coverage=1 01:04:34.991 --rc genhtml_legend=1 01:04:34.991 --rc geninfo_all_blocks=1 01:04:34.991 --rc geninfo_unexecuted_blocks=1 01:04:34.991 01:04:34.991 ' 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:04:34.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:34.991 --rc genhtml_branch_coverage=1 01:04:34.991 --rc genhtml_function_coverage=1 01:04:34.991 --rc genhtml_legend=1 01:04:34.991 --rc geninfo_all_blocks=1 01:04:34.991 --rc geninfo_unexecuted_blocks=1 01:04:34.991 01:04:34.991 ' 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:04:34.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:34.991 --rc genhtml_branch_coverage=1 01:04:34.991 --rc genhtml_function_coverage=1 01:04:34.991 --rc genhtml_legend=1 01:04:34.991 --rc geninfo_all_blocks=1 01:04:34.991 --rc geninfo_unexecuted_blocks=1 01:04:34.991 01:04:34.991 ' 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:04:34.991 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:04:34.992 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:04:34.992 Cannot find device "nvmf_init_br" 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:04:34.992 Cannot find device "nvmf_init_br2" 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:04:34.992 Cannot find device "nvmf_tgt_br" 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:04:34.992 Cannot find device "nvmf_tgt_br2" 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:04:34.992 Cannot find device "nvmf_init_br" 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set 
nvmf_init_br2 down 01:04:34.992 Cannot find device "nvmf_init_br2" 01:04:34.992 10:15:41 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 01:04:34.992 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:04:34.992 Cannot find device "nvmf_tgt_br" 01:04:34.992 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 01:04:34.992 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:04:34.992 Cannot find device "nvmf_tgt_br2" 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:04:35.252 Cannot find device "nvmf_br" 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:04:35.252 Cannot find device "nvmf_init_if" 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:04:35.252 Cannot find device "nvmf_init_if2" 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:04:35.252 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:04:35.252 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 
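The nvmf_veth_init trace running through here builds the whole test network from scratch: one namespace for the target, veth pairs whose far ends get bridged together, initiator addresses 10.0.0.1/.2 on the host and target addresses 10.0.0.3/.4 inside nvmf_tgt_ns_spdk. Condensed from the commands in the trace (the bridge enslaving and ping verification follow just below), the topology amounts to:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
# Bring host-side and namespace-side links up, then enslave the *_br ends to the bridge.
for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$l" up
done
ip netns exec nvmf_tgt_ns_spdk sh -c \
  'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge
ip link set nvmf_br up
for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$l" master nvmf_br
done

The ACCEPT rules inserted right after carry an '-m comment --comment SPDK_NVMF:...' tag; that tag is what let the earlier teardown restore the firewall with a plain iptables-save | grep -v SPDK_NVMF | iptables-restore.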
01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:04:35.252 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:04:35.252 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 01:04:35.252 01:04:35.252 --- 10.0.0.3 ping statistics --- 01:04:35.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:35.252 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:04:35.252 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:04:35.252 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 01:04:35.252 01:04:35.252 --- 10.0.0.4 ping statistics --- 01:04:35.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:35.252 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 01:04:35.252 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:04:35.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:04:35.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 01:04:35.253 01:04:35.253 --- 10.0.0.1 ping statistics --- 01:04:35.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:35.253 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 01:04:35.253 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:04:35.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:04:35.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 01:04:35.253 01:04:35.253 --- 10.0.0.2 ping statistics --- 01:04:35.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:35.253 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 01:04:35.253 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:04:35.253 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 01:04:35.253 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 01:04:35.253 10:15:42 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:04:36.192 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:04:36.192 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:04:36.452 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:04:36.452 10:15:43 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:04:36.452 10:15:43 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:04:36.452 10:15:43 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:04:36.452 10:15:43 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:04:36.452 10:15:43 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:04:36.452 10:15:43 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:04:36.452 10:15:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 01:04:36.452 10:15:43 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:04:36.452 10:15:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 01:04:36.452 10:15:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:04:36.453 10:15:43 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=111332 01:04:36.453 10:15:43 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 01:04:36.453 10:15:43 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 111332 01:04:36.453 10:15:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 111332 ']' 01:04:36.453 10:15:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:04:36.453 10:15:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 01:04:36.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:04:36.453 10:15:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:04:36.453 10:15:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 01:04:36.453 10:15:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:04:36.453 [2024-12-09 10:15:43.401111] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
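nvmfappstart then boots the SPDK target inside that namespace and blocks until the RPC socket answers. Stripped to essentials (flag values exactly as traced: -m 0xf gives the four reactor cores seen in the startup notices, -e 0xFFFF the tracepoint group mask; the readiness poll is only a sketch of what waitforlisten does):

ip netns exec nvmf_tgt_ns_spdk \
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
nvmfpid=$!
# Poll the default RPC socket until the app is ready to take commands.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done
echo "nvmf_tgt up as pid $nvmfpid"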
01:04:36.453 [2024-12-09 10:15:43.401180] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:04:36.712 [2024-12-09 10:15:43.552129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:04:36.712 [2024-12-09 10:15:43.600512] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:04:36.712 [2024-12-09 10:15:43.600578] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:04:36.712 [2024-12-09 10:15:43.600585] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:04:36.712 [2024-12-09 10:15:43.600590] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:04:36.712 [2024-12-09 10:15:43.600595] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:04:36.712 [2024-12-09 10:15:43.601534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:04:36.712 [2024-12-09 10:15:43.601638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:04:36.712 [2024-12-09 10:15:43.601727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:04:36.712 [2024-12-09 10:15:43.601733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:04:37.307 10:15:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:04:37.307 10:15:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 01:04:37.307 10:15:44 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:04:37.307 10:15:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 01:04:37.307 10:15:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:04:37.307 10:15:44 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:04:37.307 10:15:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 01:04:37.307 10:15:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 01:04:37.307 10:15:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 01:04:37.307 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 01:04:37.307 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 01:04:37.307 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 01:04:37.308 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 01:04:37.308 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 01:04:37.308 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 01:04:37.308 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 01:04:37.308 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 01:04:37.308 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 01:04:37.308 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 01:04:37.308 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 01:04:37.308 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # 
class=01 01:04:37.308 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 01:04:37.308 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 01:04:37.308 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 01:04:37.308 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 01:04:37.308 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 01:04:37.308 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 01:04:37.308 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 01:04:37.308 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 01:04:37.308 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 01:04:37.308 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 01:04:37.308 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 01:04:37.308 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 01:04:37.308 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 01:04:37.308 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 01:04:37.308 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 01:04:37.308 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 01:04:37.308 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 01:04:37.308 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 01:04:37.308 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 01:04:37.308 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 01:04:37.308 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 01:04:37.308 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 01:04:37.308 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 01:04:37.308 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 01:04:37.568 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 01:04:37.568 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 01:04:37.568 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 01:04:37.568 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 01:04:37.568 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 01:04:37.568 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 01:04:37.568 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 01:04:37.568 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 01:04:37.568 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 01:04:37.568 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 01:04:37.568 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 01:04:37.568 10:15:44 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 01:04:37.568 10:15:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 01:04:37.568 10:15:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 01:04:37.568 10:15:44 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 01:04:37.568 10:15:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:04:37.568 10:15:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 01:04:37.568 10:15:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:04:37.568 ************************************ 01:04:37.568 START TEST spdk_target_abort 01:04:37.568 ************************************ 01:04:37.568 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 01:04:37.568 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 01:04:37.568 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 01:04:37.568 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:37.568 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:04:37.568 spdk_targetn1 01:04:37.568 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:37.568 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:04:37.569 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:37.569 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:04:37.569 [2024-12-09 10:15:44.462393] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:04:37.569 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:37.569 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 01:04:37.569 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:37.569 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:04:37.569 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:37.569 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 01:04:37.569 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:37.569 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:04:37.569 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:37.569 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 01:04:37.569 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:37.569 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:04:37.569 [2024-12-09 10:15:44.509578] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:04:37.569 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:37.569 10:15:44 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 01:04:37.569 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 01:04:37.569 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 01:04:37.569 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 01:04:37.569 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 01:04:37.569 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 01:04:37.569 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 01:04:37.569 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 01:04:37.569 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 01:04:37.569 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:04:37.569 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 01:04:37.569 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:04:37.569 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 01:04:37.569 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:04:37.569 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 01:04:37.569 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:04:37.569 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 01:04:37.569 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:04:37.569 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:04:37.569 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:04:37.569 10:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:04:40.863 Initializing NVMe Controllers 01:04:40.863 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 01:04:40.863 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:04:40.864 Initialization complete. Launching workers. 
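Stripped of the rpc_cmd indirection, the target-side setup traced above is five RPCs; issued by hand with scripts/rpc.py they read (names and addresses exactly as in the log):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Claim the local drive at 0000:00:10.0 as bdev spdk_targetn1 (controller name spdk_target).
$rpc bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
# TCP transport, then a subsystem exporting that bdev on 10.0.0.3:4420.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420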
01:04:40.864 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13352, failed: 0 01:04:40.864 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1158, failed to submit 12194 01:04:40.864 success 745, unsuccessful 413, failed 0 01:04:40.864 10:15:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:04:40.864 10:15:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:04:44.157 Initializing NVMe Controllers 01:04:44.157 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 01:04:44.157 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:04:44.157 Initialization complete. Launching workers. 01:04:44.157 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5906, failed: 0 01:04:44.157 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1248, failed to submit 4658 01:04:44.157 success 254, unsuccessful 994, failed 0 01:04:44.157 10:15:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:04:44.157 10:15:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:04:47.455 Initializing NVMe Controllers 01:04:47.455 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 01:04:47.455 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:04:47.455 Initialization complete. Launching workers. 
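Each pass reruns the same abort example with a deeper submission queue (-q 4, then 24, then 64), so progressively more I/Os are in flight when the aborts land; that is why the abort-submitted counts grow run over run. A single pass, exactly as traced:

/home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
# -q queue depth, -w rw with -M 50 for a 50/50 read/write mix, -o 4096-byte I/Os,
# -r the transport ID of the subsystem under test.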
01:04:47.455 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30134, failed: 0 01:04:47.455 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2834, failed to submit 27300 01:04:47.455 success 423, unsuccessful 2411, failed 0 01:04:47.455 10:15:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 01:04:47.455 10:15:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:47.455 10:15:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:04:47.455 10:15:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:47.455 10:15:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 01:04:47.455 10:15:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:47.455 10:15:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:04:51.649 10:15:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:51.649 10:15:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 111332 01:04:51.649 10:15:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 111332 ']' 01:04:51.649 10:15:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 111332 01:04:51.649 10:15:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 01:04:51.649 10:15:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:04:51.649 10:15:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111332 01:04:51.649 10:15:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:04:51.649 10:15:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:04:51.649 10:15:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111332' 01:04:51.649 killing process with pid 111332 01:04:51.649 10:15:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 111332 01:04:51.649 10:15:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 111332 01:04:51.649 01:04:51.649 real 0m14.027s 01:04:51.649 user 0m57.147s 01:04:51.649 sys 0m1.585s 01:04:51.649 10:15:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 01:04:51.649 10:15:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:04:51.649 ************************************ 01:04:51.649 END TEST spdk_target_abort 01:04:51.649 ************************************ 01:04:51.649 10:15:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 01:04:51.650 10:15:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:04:51.650 10:15:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 01:04:51.650 10:15:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:04:51.650 ************************************ 01:04:51.650 START TEST kernel_target_abort 01:04:51.650 
************************************ 01:04:51.650 10:15:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 01:04:51.650 10:15:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 01:04:51.650 10:15:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 01:04:51.650 10:15:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 01:04:51.650 10:15:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 01:04:51.650 10:15:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:04:51.650 10:15:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:04:51.650 10:15:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:04:51.650 10:15:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:04:51.650 10:15:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:04:51.650 10:15:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:04:51.650 10:15:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:04:51.650 10:15:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 01:04:51.650 10:15:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 01:04:51.650 10:15:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 01:04:51.650 10:15:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:04:51.650 10:15:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:04:51.650 10:15:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 01:04:51.650 10:15:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 01:04:51.650 10:15:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 01:04:51.650 10:15:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 01:04:51.650 10:15:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 01:04:51.650 10:15:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:04:52.219 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:04:52.219 Waiting for block devices as requested 01:04:52.219 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:04:52.219 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 01:04:52.479 No valid GPT data, bailing 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 01:04:52.479 No valid GPT data, bailing 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
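This loop screens every visible namespace before one is handed to the kernel target: zoned devices are skipped, and a device only qualifies if blkid reports no partition table on it, which is what the expected 'No valid GPT data, bailing' lines mean. Reduced to plain shell (the harness also consults spdk-gpt.py; this sketch keeps just the blkid half):

nvme=
for dev in /sys/block/nvme*; do
  name=$(basename "$dev")
  # Skip zoned namespaces outright.
  [ -e "$dev/queue/zoned" ] && [ "$(cat "$dev/queue/zoned")" != none ] && continue
  # An empty PTTYPE means no partition table, so the device is not in use.
  if [ -z "$(blkid -s PTTYPE -o value "/dev/$name")" ]; then
    nvme=/dev/$name
  fi
done
echo "kernel target will use: $nvme"

As in the trace, the last clean device wins, /dev/nvme1n1 here.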
01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 01:04:52.479 No valid GPT data, bailing 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 01:04:52.479 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 01:04:52.739 No valid GPT data, bailing 01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]]
01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1
01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1
01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1
01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1
01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp
01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420
01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4
01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 --hostid=4e796c80-f069-427b-b4d9-5ff46fca0541 -a 10.0.0.1 -t tcp -s 4420
01:04:52.739
01:04:52.739 Discovery Log Number of Records 2, Generation counter 2
01:04:52.739 =====Discovery Log Entry 0======
01:04:52.739 trtype: tcp
01:04:52.739 adrfam: ipv4
01:04:52.739 subtype: current discovery subsystem
01:04:52.739 treq: not specified, sq flow control disable supported
01:04:52.739 portid: 1
01:04:52.739 trsvcid: 4420
01:04:52.739 subnqn: nqn.2014-08.org.nvmexpress.discovery
01:04:52.739 traddr: 10.0.0.1
01:04:52.739 eflags: none
01:04:52.739 sectype: none
01:04:52.739 =====Discovery Log Entry 1======
01:04:52.739 trtype: tcp
01:04:52.739 adrfam: ipv4
01:04:52.739 subtype: nvme subsystem
01:04:52.739 treq: not specified, sq flow control disable supported
01:04:52.739 portid: 1
01:04:52.739 trsvcid: 4420
01:04:52.739 subnqn: nqn.2016-06.io.spdk:testnqn
01:04:52.739 traddr: 10.0.0.1
01:04:52.739 eflags: none
01:04:52.739 sectype: none
01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn
01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1
01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
01:04:52.739 10:15:59
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:04:52.739 10:15:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:04:56.027 Initializing NVMe Controllers 01:04:56.027 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:04:56.027 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:04:56.027 Initialization complete. Launching workers. 01:04:56.027 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 39590, failed: 0 01:04:56.027 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 39590, failed to submit 0 01:04:56.027 success 0, unsuccessful 39590, failed 0 01:04:56.027 10:16:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:04:56.027 10:16:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:04:59.345 Initializing NVMe Controllers 01:04:59.345 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:04:59.345 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:04:59.345 Initialization complete. Launching workers. 
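Note: the mkdir/echo/ln -s sequence above is the standard Linux nvmet configfs flow: create a subsystem, back namespace 1 with the free block device found by the scan (/dev/nvme1n1 in this run), describe a TCP port, and activate the export by symlinking the subsystem into the port. A condensed sketch of what nvmf/common.sh is doing follows; the redirection targets are inferred from the upstream nvmet configfs layout, since xtrace only shows the echo sources:

    cfs=/sys/kernel/config/nvmet
    nqn=nqn.2016-06.io.spdk:testnqn
    mkdir "$cfs/subsystems/$nqn"
    mkdir "$cfs/subsystems/$nqn/namespaces/1"
    mkdir "$cfs/ports/1"
    echo "SPDK-$nqn" > "$cfs/subsystems/$nqn/attr_serial"        # serial number (assumed target)
    echo 1 > "$cfs/subsystems/$nqn/attr_allow_any_host"          # skip host ACLs for the test
    echo /dev/nvme1n1 > "$cfs/subsystems/$nqn/namespaces/1/device_path"
    echo 1 > "$cfs/subsystems/$nqn/namespaces/1/enable"
    echo 10.0.0.1 > "$cfs/ports/1/addr_traddr"
    echo tcp > "$cfs/ports/1/addr_trtype"
    echo 4420 > "$cfs/ports/1/addr_trsvcid"
    echo ipv4 > "$cfs/ports/1/addr_adrfam"
    ln -s "$cfs/subsystems/$nqn" "$cfs/ports/1/subsystems/"      # the port starts listening here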
01:04:59.345 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 77220, failed: 0 01:04:59.345 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36965, failed to submit 40255 01:04:59.345 success 0, unsuccessful 36965, failed 0 01:04:59.345 10:16:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:04:59.345 10:16:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:05:02.667 Initializing NVMe Controllers 01:05:02.667 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:05:02.667 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:05:02.667 Initialization complete. Launching workers. 01:05:02.667 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 114944, failed: 0 01:05:02.667 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28768, failed to submit 86176 01:05:02.667 success 0, unsuccessful 28768, failed 0 01:05:02.667 10:16:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 01:05:02.667 10:16:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 01:05:02.667 10:16:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 01:05:02.667 10:16:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 01:05:02.667 10:16:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:05:02.667 10:16:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 01:05:02.667 10:16:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:05:02.667 10:16:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 01:05:02.667 10:16:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 01:05:02.667 10:16:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:05:03.237 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:05:11.360 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:05:11.360 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:05:11.360 01:05:11.360 real 0m19.471s 01:05:11.360 user 0m6.929s 01:05:11.360 sys 0m10.243s 01:05:11.360 10:16:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 01:05:11.360 ************************************ 01:05:11.360 END TEST kernel_target_abort 01:05:11.360 ************************************ 01:05:11.360 10:16:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 01:05:11.360 10:16:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 01:05:11.360 10:16:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 
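Note: clean_kernel_target above unwinds the same configfs tree in reverse; the order matters because configfs refuses to rmdir entries that are still referenced. A sketch under the same assumptions as the setup sketch (the 'echo 0' is assumed to disable the namespace before removal):

    echo 0 > "$cfs/subsystems/$nqn/namespaces/1/enable"
    rm -f "$cfs/ports/1/subsystems/$nqn"    # stop exporting before tearing down
    rmdir "$cfs/subsystems/$nqn/namespaces/1"
    rmdir "$cfs/ports/1"
    rmdir "$cfs/subsystems/$nqn"
    modprobe -r nvmet_tcp nvmet             # only unloads once nothing is configured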
01:05:11.360 10:16:18 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 01:05:11.360 10:16:18 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 01:05:11.360 10:16:18 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:05:11.360 10:16:18 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 01:05:11.360 10:16:18 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 01:05:11.360 10:16:18 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:05:11.360 rmmod nvme_tcp 01:05:11.360 rmmod nvme_fabrics 01:05:11.360 rmmod nvme_keyring 01:05:11.360 10:16:18 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:05:11.360 10:16:18 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 01:05:11.360 10:16:18 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 01:05:11.360 10:16:18 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 111332 ']' 01:05:11.360 10:16:18 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 111332 01:05:11.360 10:16:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 111332 ']' 01:05:11.360 10:16:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 111332 01:05:11.360 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (111332) - No such process 01:05:11.360 Process with pid 111332 is not found 01:05:11.360 10:16:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 111332 is not found' 01:05:11.360 10:16:18 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 01:05:11.360 10:16:18 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:05:11.620 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:05:11.620 Waiting for block devices as requested 01:05:11.620 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:05:11.880 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:05:11.880 10:16:18 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:05:11.880 10:16:18 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:05:11.880 10:16:18 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 01:05:11.880 10:16:18 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 01:05:11.880 10:16:18 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:05:11.880 10:16:18 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 01:05:11.880 10:16:18 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:05:11.880 10:16:18 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:05:11.880 10:16:18 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:05:11.880 10:16:18 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:05:11.880 10:16:18 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:05:11.880 10:16:18 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:05:11.880 10:16:18 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:05:11.880 10:16:18 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:05:11.880 10:16:18 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:05:11.880 10:16:18 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:05:11.880 10:16:18 
nvmf_abort_qd_sizes -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:05:12.140 10:16:18 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:05:12.140 10:16:18 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:05:12.140 10:16:18 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:05:12.140 10:16:19 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:05:12.140 10:16:19 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 01:05:12.140 10:16:19 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:05:12.140 10:16:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:05:12.140 10:16:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:05:12.140 10:16:19 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 01:05:12.140 01:05:12.140 real 0m37.461s 01:05:12.140 user 1m5.387s 01:05:12.140 sys 0m13.778s 01:05:12.140 10:16:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 01:05:12.140 10:16:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:05:12.140 ************************************ 01:05:12.140 END TEST nvmf_abort_qd_sizes 01:05:12.140 ************************************ 01:05:12.140 10:16:19 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 01:05:12.140 10:16:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:05:12.140 10:16:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:05:12.140 10:16:19 -- common/autotest_common.sh@10 -- # set +x 01:05:12.140 ************************************ 01:05:12.140 START TEST keyring_file 01:05:12.140 ************************************ 01:05:12.140 10:16:19 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 01:05:12.401 * Looking for test storage... 
01:05:12.401 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 01:05:12.401 10:16:19 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:05:12.401 10:16:19 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 01:05:12.401 10:16:19 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:05:12.401 10:16:19 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:05:12.401 10:16:19 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:05:12.401 10:16:19 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 01:05:12.401 10:16:19 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 01:05:12.401 10:16:19 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 01:05:12.401 10:16:19 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 01:05:12.401 10:16:19 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 01:05:12.401 10:16:19 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 01:05:12.401 10:16:19 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 01:05:12.401 10:16:19 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 01:05:12.401 10:16:19 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 01:05:12.401 10:16:19 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:05:12.401 10:16:19 keyring_file -- scripts/common.sh@344 -- # case "$op" in 01:05:12.401 10:16:19 keyring_file -- scripts/common.sh@345 -- # : 1 01:05:12.401 10:16:19 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 01:05:12.401 10:16:19 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:05:12.401 10:16:19 keyring_file -- scripts/common.sh@365 -- # decimal 1 01:05:12.401 10:16:19 keyring_file -- scripts/common.sh@353 -- # local d=1 01:05:12.401 10:16:19 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:05:12.401 10:16:19 keyring_file -- scripts/common.sh@355 -- # echo 1 01:05:12.401 10:16:19 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 01:05:12.401 10:16:19 keyring_file -- scripts/common.sh@366 -- # decimal 2 01:05:12.401 10:16:19 keyring_file -- scripts/common.sh@353 -- # local d=2 01:05:12.401 10:16:19 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:05:12.401 10:16:19 keyring_file -- scripts/common.sh@355 -- # echo 2 01:05:12.401 10:16:19 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 01:05:12.401 10:16:19 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:05:12.401 10:16:19 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:05:12.401 10:16:19 keyring_file -- scripts/common.sh@368 -- # return 0 01:05:12.401 10:16:19 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:05:12.401 10:16:19 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:05:12.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:05:12.401 --rc genhtml_branch_coverage=1 01:05:12.401 --rc genhtml_function_coverage=1 01:05:12.401 --rc genhtml_legend=1 01:05:12.401 --rc geninfo_all_blocks=1 01:05:12.401 --rc geninfo_unexecuted_blocks=1 01:05:12.401 01:05:12.401 ' 01:05:12.401 10:16:19 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:05:12.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:05:12.401 --rc genhtml_branch_coverage=1 01:05:12.401 --rc genhtml_function_coverage=1 01:05:12.401 --rc genhtml_legend=1 01:05:12.401 --rc geninfo_all_blocks=1 01:05:12.401 --rc 
geninfo_unexecuted_blocks=1 01:05:12.401 01:05:12.401 ' 01:05:12.401 10:16:19 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:05:12.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:05:12.401 --rc genhtml_branch_coverage=1 01:05:12.401 --rc genhtml_function_coverage=1 01:05:12.401 --rc genhtml_legend=1 01:05:12.401 --rc geninfo_all_blocks=1 01:05:12.401 --rc geninfo_unexecuted_blocks=1 01:05:12.401 01:05:12.401 ' 01:05:12.401 10:16:19 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:05:12.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:05:12.401 --rc genhtml_branch_coverage=1 01:05:12.401 --rc genhtml_function_coverage=1 01:05:12.401 --rc genhtml_legend=1 01:05:12.401 --rc geninfo_all_blocks=1 01:05:12.401 --rc geninfo_unexecuted_blocks=1 01:05:12.401 01:05:12.401 ' 01:05:12.401 10:16:19 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 01:05:12.401 10:16:19 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:05:12.401 10:16:19 keyring_file -- nvmf/common.sh@7 -- # uname -s 01:05:12.401 10:16:19 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:05:12.401 10:16:19 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:05:12.401 10:16:19 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:05:12.401 10:16:19 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:05:12.401 10:16:19 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:05:12.401 10:16:19 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:05:12.401 10:16:19 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:05:12.401 10:16:19 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:05:12.401 10:16:19 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:05:12.401 10:16:19 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:05:12.401 10:16:19 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 01:05:12.401 10:16:19 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 01:05:12.401 10:16:19 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:05:12.401 10:16:19 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:05:12.401 10:16:19 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:05:12.401 10:16:19 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:05:12.401 10:16:19 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:05:12.401 10:16:19 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 01:05:12.401 10:16:19 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:05:12.401 10:16:19 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:05:12.402 10:16:19 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:05:12.402 10:16:19 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:12.402 10:16:19 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:12.402 10:16:19 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:12.402 10:16:19 keyring_file -- paths/export.sh@5 -- # export PATH 01:05:12.402 10:16:19 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:12.402 10:16:19 keyring_file -- nvmf/common.sh@51 -- # : 0 01:05:12.402 10:16:19 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:05:12.402 10:16:19 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:05:12.402 10:16:19 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:05:12.402 10:16:19 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:05:12.402 10:16:19 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:05:12.402 10:16:19 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:05:12.402 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:05:12.402 10:16:19 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:05:12.402 10:16:19 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:05:12.402 10:16:19 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 01:05:12.402 10:16:19 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 01:05:12.402 10:16:19 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 01:05:12.402 10:16:19 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 01:05:12.402 10:16:19 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 01:05:12.402 10:16:19 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 01:05:12.402 10:16:19 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 01:05:12.402 10:16:19 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 01:05:12.402 10:16:19 keyring_file -- keyring/common.sh@15 -- # local name key digest path 01:05:12.402 10:16:19 
keyring_file -- keyring/common.sh@17 -- # name=key0 01:05:12.402 10:16:19 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 01:05:12.402 10:16:19 keyring_file -- keyring/common.sh@17 -- # digest=0 01:05:12.402 10:16:19 keyring_file -- keyring/common.sh@18 -- # mktemp 01:05:12.402 10:16:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.BuQhpVqHMf 01:05:12.402 10:16:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 01:05:12.402 10:16:19 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 01:05:12.402 10:16:19 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 01:05:12.402 10:16:19 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:05:12.402 10:16:19 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 01:05:12.402 10:16:19 keyring_file -- nvmf/common.sh@732 -- # digest=0 01:05:12.402 10:16:19 keyring_file -- nvmf/common.sh@733 -- # python - 01:05:12.662 10:16:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.BuQhpVqHMf 01:05:12.662 10:16:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.BuQhpVqHMf 01:05:12.662 10:16:19 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.BuQhpVqHMf 01:05:12.662 10:16:19 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 01:05:12.662 10:16:19 keyring_file -- keyring/common.sh@15 -- # local name key digest path 01:05:12.662 10:16:19 keyring_file -- keyring/common.sh@17 -- # name=key1 01:05:12.662 10:16:19 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 01:05:12.662 10:16:19 keyring_file -- keyring/common.sh@17 -- # digest=0 01:05:12.662 10:16:19 keyring_file -- keyring/common.sh@18 -- # mktemp 01:05:12.662 10:16:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.2hDlKjhVDI 01:05:12.662 10:16:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 01:05:12.662 10:16:19 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 01:05:12.662 10:16:19 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 01:05:12.662 10:16:19 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:05:12.662 10:16:19 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 01:05:12.662 10:16:19 keyring_file -- nvmf/common.sh@732 -- # digest=0 01:05:12.662 10:16:19 keyring_file -- nvmf/common.sh@733 -- # python - 01:05:12.662 10:16:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.2hDlKjhVDI 01:05:12.662 10:16:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.2hDlKjhVDI 01:05:12.662 10:16:19 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.2hDlKjhVDI 01:05:12.662 10:16:19 keyring_file -- keyring/file.sh@30 -- # tgtpid=112327 01:05:12.662 10:16:19 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:05:12.662 10:16:19 keyring_file -- keyring/file.sh@32 -- # waitforlisten 112327 01:05:12.662 10:16:19 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 112327 ']' 01:05:12.662 10:16:19 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:05:12.662 10:16:19 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 01:05:12.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
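Note: prep_key above writes each key to a mktemp file in the TLS PSK interchange form and locks it to mode 0600 (the keyring enforces this later in the test). The inline `python -` step presumably implements the TP 8006 interchange encoding: prefix, two-digit hash identifier, then base64 of the key bytes with a 4-byte CRC32 appended. A minimal stand-alone sketch, assuming the hex string is used verbatim as the key bytes (which is what nvmf/common.sh appears to do):

    key=00112233445566778899aabbccddeeff digest=0
    python3 - "$key" "$digest" <<'EOF'
    import base64, sys, zlib
    key = sys.argv[1].encode()                    # key string used as-is (assumption)
    crc = zlib.crc32(key).to_bytes(4, "little")   # integrity check per the interchange format
    b64 = base64.b64encode(key + crc).decode()
    print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]), b64))
    EOF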
01:05:12.662 10:16:19 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:05:12.662 10:16:19 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 01:05:12.662 10:16:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:05:12.662 [2024-12-09 10:16:19.610627] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 01:05:12.662 [2024-12-09 10:16:19.610703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112327 ] 01:05:12.922 [2024-12-09 10:16:19.760038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:12.922 [2024-12-09 10:16:19.807153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:05:13.492 10:16:20 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:05:13.492 10:16:20 keyring_file -- common/autotest_common.sh@868 -- # return 0 01:05:13.492 10:16:20 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 01:05:13.492 10:16:20 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:13.492 10:16:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:05:13.492 [2024-12-09 10:16:20.498534] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:05:13.492 null0 01:05:13.492 [2024-12-09 10:16:20.530418] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:05:13.492 [2024-12-09 10:16:20.530577] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 01:05:13.752 10:16:20 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:13.752 10:16:20 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 01:05:13.752 10:16:20 keyring_file -- common/autotest_common.sh@652 -- # local es=0 01:05:13.752 10:16:20 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 01:05:13.752 10:16:20 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:05:13.752 10:16:20 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:05:13.752 10:16:20 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:05:13.752 10:16:20 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:05:13.752 10:16:20 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 01:05:13.752 10:16:20 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:13.752 10:16:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:05:13.752 [2024-12-09 10:16:20.562351] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 01:05:13.752 2024/12/09 10:16:20 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 01:05:13.752 request: 01:05:13.752 { 01:05:13.752 "method": "nvmf_subsystem_add_listener", 01:05:13.752 "params": { 
01:05:13.752 "nqn": "nqn.2016-06.io.spdk:cnode0", 01:05:13.752 "secure_channel": false, 01:05:13.752 "listen_address": { 01:05:13.752 "trtype": "tcp", 01:05:13.752 "traddr": "127.0.0.1", 01:05:13.752 "trsvcid": "4420" 01:05:13.752 } 01:05:13.752 } 01:05:13.752 } 01:05:13.752 Got JSON-RPC error response 01:05:13.752 GoRPCClient: error on JSON-RPC call 01:05:13.752 10:16:20 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:05:13.752 10:16:20 keyring_file -- common/autotest_common.sh@655 -- # es=1 01:05:13.752 10:16:20 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:05:13.752 10:16:20 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:05:13.752 10:16:20 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:05:13.752 10:16:20 keyring_file -- keyring/file.sh@47 -- # bperfpid=112363 01:05:13.752 10:16:20 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 01:05:13.752 10:16:20 keyring_file -- keyring/file.sh@49 -- # waitforlisten 112363 /var/tmp/bperf.sock 01:05:13.752 10:16:20 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 112363 ']' 01:05:13.752 10:16:20 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:05:13.752 10:16:20 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 01:05:13.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:05:13.753 10:16:20 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:05:13.753 10:16:20 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 01:05:13.753 10:16:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:05:13.753 [2024-12-09 10:16:20.627623] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
01:05:13.753 [2024-12-09 10:16:20.627685] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112363 ] 01:05:13.753 [2024-12-09 10:16:20.758053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:14.012 [2024-12-09 10:16:20.812020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:05:14.581 10:16:21 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:05:14.581 10:16:21 keyring_file -- common/autotest_common.sh@868 -- # return 0 01:05:14.581 10:16:21 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BuQhpVqHMf 01:05:14.581 10:16:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BuQhpVqHMf 01:05:14.841 10:16:21 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.2hDlKjhVDI 01:05:14.841 10:16:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.2hDlKjhVDI 01:05:15.101 10:16:22 keyring_file -- keyring/file.sh@52 -- # get_key key0 01:05:15.101 10:16:22 keyring_file -- keyring/file.sh@52 -- # jq -r .path 01:05:15.101 10:16:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:05:15.101 10:16:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:05:15.101 10:16:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:05:15.361 10:16:22 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.BuQhpVqHMf == \/\t\m\p\/\t\m\p\.\B\u\Q\h\p\V\q\H\M\f ]] 01:05:15.361 10:16:22 keyring_file -- keyring/file.sh@53 -- # jq -r .path 01:05:15.361 10:16:22 keyring_file -- keyring/file.sh@53 -- # get_key key1 01:05:15.361 10:16:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:05:15.361 10:16:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:05:15.361 10:16:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:05:15.621 10:16:22 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.2hDlKjhVDI == \/\t\m\p\/\t\m\p\.\2\h\D\l\K\j\h\V\D\I ]] 01:05:15.621 10:16:22 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 01:05:15.621 10:16:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:05:15.621 10:16:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:05:15.621 10:16:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:05:15.621 10:16:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:05:15.621 10:16:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:05:15.880 10:16:22 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 01:05:15.880 10:16:22 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 01:05:15.880 10:16:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:05:15.880 10:16:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:05:15.880 10:16:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:05:15.880 10:16:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | 
select(.name == "key1")' 01:05:15.880 10:16:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:05:16.141 10:16:23 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 01:05:16.141 10:16:23 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:05:16.141 10:16:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:05:16.401 [2024-12-09 10:16:23.250117] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:05:16.401 nvme0n1 01:05:16.401 10:16:23 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 01:05:16.401 10:16:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:05:16.401 10:16:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:05:16.401 10:16:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:05:16.402 10:16:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:05:16.402 10:16:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:05:16.661 10:16:23 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 01:05:16.661 10:16:23 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 01:05:16.661 10:16:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:05:16.661 10:16:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:05:16.661 10:16:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:05:16.661 10:16:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:05:16.661 10:16:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:05:16.920 10:16:23 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 01:05:16.920 10:16:23 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:05:16.920 Running I/O for 1 seconds... 
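Note: get_refcnt/get_key above are thin wrappers over the bperf RPC socket: dump the keyring as JSON and filter it with jq. The equivalent direct invocation:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
        | jq -r '.[] | select(.name == "key0").refcnt'

A registered-but-idle key reports refcnt 1; it rises to 2 once nvme0 is attached with --psk key0, which is exactly the 1-vs-2 assertions in the trace above.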
01:05:18.301 16664.00 IOPS, 65.09 MiB/s
01:05:18.301 Latency(us)
01:05:18.301 [2024-12-09T10:16:25.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:05:18.301 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
01:05:18.301 nvme0n1 : 1.01 16706.46 65.26 0.00 0.00 7645.66 3348.35 12649.31
01:05:18.301 [2024-12-09T10:16:25.345Z] ===================================================================================================================
01:05:18.301 [2024-12-09T10:16:25.345Z] Total : 16706.46 65.26 0.00 0.00 7645.66 3348.35 12649.31
01:05:18.301 {
01:05:18.301   "results": [
01:05:18.301     {
01:05:18.301       "job": "nvme0n1",
01:05:18.301       "core_mask": "0x2",
01:05:18.301       "workload": "randrw",
01:05:18.301       "percentage": 50,
01:05:18.301       "status": "finished",
01:05:18.301       "queue_depth": 128,
01:05:18.301       "io_size": 4096,
01:05:18.301       "runtime": 1.00518,
01:05:18.301       "iops": 16706.460534431644,
01:05:18.301       "mibps": 65.25961146262361,
01:05:18.301       "io_failed": 0,
01:05:18.301       "io_timeout": 0,
01:05:18.301       "avg_latency_us": 7645.655632038407,
01:05:18.301       "min_latency_us": 3348.345851528384,
01:05:18.301       "max_latency_us": 12649.30655021834
01:05:18.301     }
01:05:18.301   ],
01:05:18.301   "core_count": 1
01:05:18.301 }
01:05:18.301 10:16:24 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0
01:05:18.301 10:16:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
01:05:18.301 10:16:25 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0
01:05:18.301 10:16:25 keyring_file -- keyring/common.sh@12 -- # get_key key0
01:05:18.301 10:16:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
01:05:18.301 10:16:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
01:05:18.301 10:16:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
01:05:18.301 10:16:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
01:05:18.560 10:16:25 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 ))
01:05:18.560 10:16:25 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1
01:05:18.560 10:16:25 keyring_file -- keyring/common.sh@12 -- # get_key key1
01:05:18.560 10:16:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
01:05:18.560 10:16:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
01:05:18.560 10:16:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
01:05:18.560 10:16:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
01:05:18.819 10:16:25 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 ))
01:05:18.819 10:16:25 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
01:05:18.819 10:16:25 keyring_file -- common/autotest_common.sh@652 -- # local es=0
01:05:18.819 10:16:25 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
01:05:18.819 10:16:25 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
01:05:18.819 10:16:25 keyring_file --
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:05:18.819 10:16:25 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 01:05:18.819 10:16:25 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:05:18.819 10:16:25 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:05:18.819 10:16:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:05:19.078 [2024-12-09 10:16:25.923362] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:05:19.078 [2024-12-09 10:16:25.923382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2352010 (107): Transport endpoint is not connected 01:05:19.078 [2024-12-09 10:16:25.924371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2352010 (9): Bad file descriptor 01:05:19.078 [2024-12-09 10:16:25.925365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 01:05:19.078 [2024-12-09 10:16:25.925426] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 01:05:19.078 [2024-12-09 10:16:25.925490] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 01:05:19.078 [2024-12-09 10:16:25.925537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
01:05:19.078 2024/12/09 10:16:25 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:05:19.078 request: 01:05:19.078 { 01:05:19.078 "method": "bdev_nvme_attach_controller", 01:05:19.078 "params": { 01:05:19.078 "name": "nvme0", 01:05:19.078 "trtype": "tcp", 01:05:19.078 "traddr": "127.0.0.1", 01:05:19.078 "adrfam": "ipv4", 01:05:19.078 "trsvcid": "4420", 01:05:19.078 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:05:19.078 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:05:19.078 "prchk_reftag": false, 01:05:19.078 "prchk_guard": false, 01:05:19.078 "hdgst": false, 01:05:19.078 "ddgst": false, 01:05:19.078 "psk": "key1", 01:05:19.078 "allow_unrecognized_csi": false 01:05:19.078 } 01:05:19.078 } 01:05:19.078 Got JSON-RPC error response 01:05:19.078 GoRPCClient: error on JSON-RPC call 01:05:19.078 10:16:25 keyring_file -- common/autotest_common.sh@655 -- # es=1 01:05:19.078 10:16:25 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:05:19.078 10:16:25 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:05:19.078 10:16:25 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:05:19.078 10:16:25 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 01:05:19.078 10:16:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:05:19.078 10:16:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:05:19.078 10:16:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:05:19.078 10:16:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:05:19.078 10:16:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:05:19.337 10:16:26 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 01:05:19.337 10:16:26 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 01:05:19.337 10:16:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:05:19.337 10:16:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:05:19.337 10:16:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:05:19.337 10:16:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:05:19.337 10:16:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:05:19.597 10:16:26 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 01:05:19.597 10:16:26 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 01:05:19.597 10:16:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 01:05:19.597 10:16:26 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 01:05:19.597 10:16:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 01:05:19.857 10:16:26 keyring_file -- keyring/file.sh@78 -- # jq length 01:05:19.857 10:16:26 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 01:05:19.857 10:16:26 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:05:20.118 10:16:27 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 01:05:20.118 10:16:27 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.BuQhpVqHMf 01:05:20.118 10:16:27 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.BuQhpVqHMf 01:05:20.118 10:16:27 keyring_file -- common/autotest_common.sh@652 -- # local es=0 01:05:20.119 10:16:27 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.BuQhpVqHMf 01:05:20.119 10:16:27 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 01:05:20.119 10:16:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:05:20.119 10:16:27 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 01:05:20.119 10:16:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:05:20.119 10:16:27 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BuQhpVqHMf 01:05:20.119 10:16:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BuQhpVqHMf 01:05:20.408 [2024-12-09 10:16:27.266008] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.BuQhpVqHMf': 0100660 01:05:20.408 [2024-12-09 10:16:27.266551] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 01:05:20.408 2024/12/09 10:16:27 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.BuQhpVqHMf], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 01:05:20.408 request: 01:05:20.408 { 01:05:20.408 "method": "keyring_file_add_key", 01:05:20.408 "params": { 01:05:20.408 "name": "key0", 01:05:20.408 "path": "/tmp/tmp.BuQhpVqHMf" 01:05:20.408 } 01:05:20.408 } 01:05:20.408 Got JSON-RPC error response 01:05:20.408 GoRPCClient: error on JSON-RPC call 01:05:20.408 10:16:27 keyring_file -- common/autotest_common.sh@655 -- # es=1 01:05:20.408 10:16:27 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:05:20.408 10:16:27 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:05:20.408 10:16:27 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:05:20.409 10:16:27 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.BuQhpVqHMf 01:05:20.409 10:16:27 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BuQhpVqHMf 01:05:20.409 10:16:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BuQhpVqHMf 01:05:20.669 10:16:27 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.BuQhpVqHMf 01:05:20.669 10:16:27 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 01:05:20.669 10:16:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:05:20.669 10:16:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:05:20.669 10:16:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:05:20.669 10:16:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:05:20.669 10:16:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:05:20.929 10:16:27 keyring_file -- 
keyring/file.sh@89 -- # (( 1 == 1 )) 01:05:20.929 10:16:27 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:05:20.929 10:16:27 keyring_file -- common/autotest_common.sh@652 -- # local es=0 01:05:20.929 10:16:27 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:05:20.929 10:16:27 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 01:05:20.929 10:16:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:05:20.929 10:16:27 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 01:05:20.929 10:16:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:05:20.929 10:16:27 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:05:20.929 10:16:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:05:20.929 [2024-12-09 10:16:27.964878] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.BuQhpVqHMf': No such file or directory 01:05:20.929 [2024-12-09 10:16:27.965176] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 01:05:20.929 [2024-12-09 10:16:27.965250] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 01:05:20.929 [2024-12-09 10:16:27.965305] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 01:05:20.929 [2024-12-09 10:16:27.965381] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 01:05:20.929 [2024-12-09 10:16:27.965444] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 01:05:20.929 2024/12/09 10:16:27 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 01:05:20.929 request: 01:05:20.929 { 01:05:20.929 "method": "bdev_nvme_attach_controller", 01:05:20.929 "params": { 01:05:20.929 "name": "nvme0", 01:05:20.929 "trtype": "tcp", 01:05:20.929 "traddr": "127.0.0.1", 01:05:20.929 "adrfam": "ipv4", 01:05:20.929 "trsvcid": "4420", 01:05:20.929 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:05:20.929 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:05:20.929 "prchk_reftag": false, 01:05:20.929 "prchk_guard": false, 01:05:20.929 "hdgst": false, 01:05:20.929 "ddgst": false, 01:05:20.929 "psk": "key0", 01:05:20.929 "allow_unrecognized_csi": false 01:05:20.929 } 01:05:20.929 } 01:05:20.929 Got JSON-RPC error response 01:05:20.929 
GoRPCClient: error on JSON-RPC call 01:05:21.189 10:16:27 keyring_file -- common/autotest_common.sh@655 -- # es=1 01:05:21.189 10:16:27 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:05:21.189 10:16:27 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:05:21.189 10:16:27 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:05:21.189 10:16:27 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 01:05:21.189 10:16:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 01:05:21.189 10:16:28 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 01:05:21.189 10:16:28 keyring_file -- keyring/common.sh@15 -- # local name key digest path 01:05:21.189 10:16:28 keyring_file -- keyring/common.sh@17 -- # name=key0 01:05:21.189 10:16:28 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 01:05:21.189 10:16:28 keyring_file -- keyring/common.sh@17 -- # digest=0 01:05:21.189 10:16:28 keyring_file -- keyring/common.sh@18 -- # mktemp 01:05:21.189 10:16:28 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.1UnYhmbnR4 01:05:21.189 10:16:28 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 01:05:21.189 10:16:28 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 01:05:21.189 10:16:28 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 01:05:21.189 10:16:28 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:05:21.189 10:16:28 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 01:05:21.189 10:16:28 keyring_file -- nvmf/common.sh@732 -- # digest=0 01:05:21.189 10:16:28 keyring_file -- nvmf/common.sh@733 -- # python - 01:05:21.449 10:16:28 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.1UnYhmbnR4 01:05:21.449 10:16:28 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.1UnYhmbnR4 01:05:21.449 10:16:28 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.1UnYhmbnR4 01:05:21.449 10:16:28 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1UnYhmbnR4 01:05:21.449 10:16:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1UnYhmbnR4 01:05:21.449 10:16:28 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:05:21.449 10:16:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:05:21.708 nvme0n1 01:05:21.968 10:16:28 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 01:05:21.968 10:16:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:05:21.968 10:16:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:05:21.968 10:16:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:05:21.968 10:16:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:05:21.968 10:16:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
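The prep_key/format_interchange_psk sequence traced above builds the NVMe TLS PSK interchange string written to /tmp/tmp.1UnYhmbnR4: the key bytes get a 4-byte little-endian CRC32 appended, the result is base64-encoded, and the whole thing is framed as NVMeTLSkey-1:<hash>:<base64>: (digest 0 selects hash indicator 00). A minimal sketch of the python step, assuming Python 3 and treating the key as ASCII bytes the way the trace does:

python3 - <<'EOF'
import base64, zlib
key = b"00112233445566778899aabbccddeeff"    # raw key, as passed to prep_key above
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte CRC32 trailer
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
EOF

This should reproduce NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:, the same interchange string the keyring_linux run further below loads with keyctl. The keyring_get_keys call just traced feeds the refcount check that follows.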
01:05:21.968 10:16:29 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 01:05:21.968 10:16:29 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 01:05:21.968 10:16:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 01:05:22.228 10:16:29 keyring_file -- keyring/file.sh@102 -- # get_key key0 01:05:22.228 10:16:29 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 01:05:22.228 10:16:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:05:22.228 10:16:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:05:22.228 10:16:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:05:22.487 10:16:29 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 01:05:22.487 10:16:29 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 01:05:22.487 10:16:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:05:22.487 10:16:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:05:22.487 10:16:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:05:22.487 10:16:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:05:22.487 10:16:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:05:22.746 10:16:29 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 01:05:22.746 10:16:29 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 01:05:22.746 10:16:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 01:05:23.006 10:16:29 keyring_file -- keyring/file.sh@105 -- # jq length 01:05:23.006 10:16:29 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 01:05:23.006 10:16:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:05:23.266 10:16:30 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 01:05:23.266 10:16:30 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1UnYhmbnR4 01:05:23.266 10:16:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1UnYhmbnR4 01:05:23.525 10:16:30 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.2hDlKjhVDI 01:05:23.525 10:16:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.2hDlKjhVDI 01:05:23.785 10:16:30 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:05:23.785 10:16:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:05:24.045 nvme0n1 01:05:24.045 10:16:30 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 01:05:24.045 10:16:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 
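The refcnt assertions above pin down the key lifecycle: right after the attach, key0 is held twice (once by the keyring, once by the connected controller), so file.sh@100 expects 2; keyring_file_remove_key only flags the key as removed while the attached controller still holds a reference, so refcnt drops to 1, and only bdev_nvme_detach_controller releases the last holder, after which keyring_get_keys reports zero keys. A sketch of the two helpers from keyring/common.sh as reconstructed from the trace:

get_key() {   # JSON object for one named key, per keyring/common.sh@10
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys |
    jq ".[] | select(.name == \"$1\")"
}
get_refcnt() { get_key "$1" | jq -r .refcnt; }   # keyring/common.sh@12

(( $(get_refcnt key0) == 2 )) is then exactly the check at file.sh@100. The save_config call traced at the end captures the whole running state as JSON (its output follows); file.sh then kills this bperf instance and replays the saved state into a fresh one through process substitution, which is why the relaunch below shows -c /dev/fd/63 with the config echoed into the pipeline. A sketch of that save/replay pattern, using the paths from the trace:

config=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config)
# ...stop the old instance, then boot a new one from the saved state...
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
  -r /var/tmp/bperf.sock -z -c <(echo "$config")

Because the saved keyring section re-adds key0 and key1 from their /tmp paths, the restarted target comes up with both keys already loaded, which the (( 2 == 2 )) length check at file.sh@121 confirms.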
01:05:24.306 10:16:31 keyring_file -- keyring/file.sh@113 -- # config='{ 01:05:24.306 "subsystems": [ 01:05:24.306 { 01:05:24.306 "subsystem": "keyring", 01:05:24.306 "config": [ 01:05:24.306 { 01:05:24.306 "method": "keyring_file_add_key", 01:05:24.306 "params": { 01:05:24.306 "name": "key0", 01:05:24.306 "path": "/tmp/tmp.1UnYhmbnR4" 01:05:24.306 } 01:05:24.306 }, 01:05:24.306 { 01:05:24.306 "method": "keyring_file_add_key", 01:05:24.306 "params": { 01:05:24.306 "name": "key1", 01:05:24.306 "path": "/tmp/tmp.2hDlKjhVDI" 01:05:24.306 } 01:05:24.306 } 01:05:24.306 ] 01:05:24.306 }, 01:05:24.306 { 01:05:24.306 "subsystem": "iobuf", 01:05:24.306 "config": [ 01:05:24.306 { 01:05:24.306 "method": "iobuf_set_options", 01:05:24.306 "params": { 01:05:24.306 "enable_numa": false, 01:05:24.306 "large_bufsize": 135168, 01:05:24.306 "large_pool_count": 1024, 01:05:24.306 "small_bufsize": 8192, 01:05:24.306 "small_pool_count": 8192 01:05:24.306 } 01:05:24.306 } 01:05:24.306 ] 01:05:24.306 }, 01:05:24.306 { 01:05:24.306 "subsystem": "sock", 01:05:24.306 "config": [ 01:05:24.306 { 01:05:24.306 "method": "sock_set_default_impl", 01:05:24.306 "params": { 01:05:24.306 "impl_name": "posix" 01:05:24.306 } 01:05:24.306 }, 01:05:24.306 { 01:05:24.306 "method": "sock_impl_set_options", 01:05:24.306 "params": { 01:05:24.306 "enable_ktls": false, 01:05:24.306 "enable_placement_id": 0, 01:05:24.306 "enable_quickack": false, 01:05:24.306 "enable_recv_pipe": true, 01:05:24.306 "enable_zerocopy_send_client": false, 01:05:24.306 "enable_zerocopy_send_server": true, 01:05:24.306 "impl_name": "ssl", 01:05:24.306 "recv_buf_size": 4096, 01:05:24.306 "send_buf_size": 4096, 01:05:24.306 "tls_version": 0, 01:05:24.306 "zerocopy_threshold": 0 01:05:24.306 } 01:05:24.306 }, 01:05:24.306 { 01:05:24.306 "method": "sock_impl_set_options", 01:05:24.306 "params": { 01:05:24.306 "enable_ktls": false, 01:05:24.306 "enable_placement_id": 0, 01:05:24.306 "enable_quickack": false, 01:05:24.306 "enable_recv_pipe": true, 01:05:24.306 "enable_zerocopy_send_client": false, 01:05:24.306 "enable_zerocopy_send_server": true, 01:05:24.306 "impl_name": "posix", 01:05:24.306 "recv_buf_size": 2097152, 01:05:24.306 "send_buf_size": 2097152, 01:05:24.306 "tls_version": 0, 01:05:24.306 "zerocopy_threshold": 0 01:05:24.306 } 01:05:24.306 } 01:05:24.306 ] 01:05:24.306 }, 01:05:24.306 { 01:05:24.306 "subsystem": "vmd", 01:05:24.306 "config": [] 01:05:24.306 }, 01:05:24.306 { 01:05:24.306 "subsystem": "accel", 01:05:24.306 "config": [ 01:05:24.306 { 01:05:24.306 "method": "accel_set_options", 01:05:24.306 "params": { 01:05:24.306 "buf_count": 2048, 01:05:24.306 "large_cache_size": 16, 01:05:24.306 "sequence_count": 2048, 01:05:24.306 "small_cache_size": 128, 01:05:24.306 "task_count": 2048 01:05:24.306 } 01:05:24.306 } 01:05:24.306 ] 01:05:24.306 }, 01:05:24.306 { 01:05:24.306 "subsystem": "bdev", 01:05:24.306 "config": [ 01:05:24.306 { 01:05:24.306 "method": "bdev_set_options", 01:05:24.306 "params": { 01:05:24.306 "bdev_auto_examine": true, 01:05:24.306 "bdev_io_cache_size": 256, 01:05:24.306 "bdev_io_pool_size": 65535, 01:05:24.306 "iobuf_large_cache_size": 16, 01:05:24.306 "iobuf_small_cache_size": 128 01:05:24.306 } 01:05:24.306 }, 01:05:24.306 { 01:05:24.306 "method": "bdev_raid_set_options", 01:05:24.306 "params": { 01:05:24.306 "process_max_bandwidth_mb_sec": 0, 01:05:24.306 "process_window_size_kb": 1024 01:05:24.306 } 01:05:24.306 }, 01:05:24.306 { 01:05:24.306 "method": "bdev_iscsi_set_options", 01:05:24.306 "params": { 01:05:24.306 
"timeout_sec": 30 01:05:24.306 } 01:05:24.306 }, 01:05:24.306 { 01:05:24.306 "method": "bdev_nvme_set_options", 01:05:24.306 "params": { 01:05:24.306 "action_on_timeout": "none", 01:05:24.307 "allow_accel_sequence": false, 01:05:24.307 "arbitration_burst": 0, 01:05:24.307 "bdev_retry_count": 3, 01:05:24.307 "ctrlr_loss_timeout_sec": 0, 01:05:24.307 "delay_cmd_submit": true, 01:05:24.307 "dhchap_dhgroups": [ 01:05:24.307 "null", 01:05:24.307 "ffdhe2048", 01:05:24.307 "ffdhe3072", 01:05:24.307 "ffdhe4096", 01:05:24.307 "ffdhe6144", 01:05:24.307 "ffdhe8192" 01:05:24.307 ], 01:05:24.307 "dhchap_digests": [ 01:05:24.307 "sha256", 01:05:24.307 "sha384", 01:05:24.307 "sha512" 01:05:24.307 ], 01:05:24.307 "disable_auto_failback": false, 01:05:24.307 "fast_io_fail_timeout_sec": 0, 01:05:24.307 "generate_uuids": false, 01:05:24.307 "high_priority_weight": 0, 01:05:24.307 "io_path_stat": false, 01:05:24.307 "io_queue_requests": 512, 01:05:24.307 "keep_alive_timeout_ms": 10000, 01:05:24.307 "low_priority_weight": 0, 01:05:24.307 "medium_priority_weight": 0, 01:05:24.307 "nvme_adminq_poll_period_us": 10000, 01:05:24.307 "nvme_error_stat": false, 01:05:24.307 "nvme_ioq_poll_period_us": 0, 01:05:24.307 "rdma_cm_event_timeout_ms": 0, 01:05:24.307 "rdma_max_cq_size": 0, 01:05:24.307 "rdma_srq_size": 0, 01:05:24.307 "reconnect_delay_sec": 0, 01:05:24.307 "timeout_admin_us": 0, 01:05:24.307 "timeout_us": 0, 01:05:24.307 "transport_ack_timeout": 0, 01:05:24.307 "transport_retry_count": 4, 01:05:24.307 "transport_tos": 0 01:05:24.307 } 01:05:24.307 }, 01:05:24.307 { 01:05:24.307 "method": "bdev_nvme_attach_controller", 01:05:24.307 "params": { 01:05:24.307 "adrfam": "IPv4", 01:05:24.307 "ctrlr_loss_timeout_sec": 0, 01:05:24.307 "ddgst": false, 01:05:24.307 "fast_io_fail_timeout_sec": 0, 01:05:24.307 "hdgst": false, 01:05:24.307 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:05:24.307 "multipath": "multipath", 01:05:24.307 "name": "nvme0", 01:05:24.307 "prchk_guard": false, 01:05:24.307 "prchk_reftag": false, 01:05:24.307 "psk": "key0", 01:05:24.307 "reconnect_delay_sec": 0, 01:05:24.307 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:05:24.307 "traddr": "127.0.0.1", 01:05:24.307 "trsvcid": "4420", 01:05:24.307 "trtype": "TCP" 01:05:24.307 } 01:05:24.307 }, 01:05:24.307 { 01:05:24.307 "method": "bdev_nvme_set_hotplug", 01:05:24.307 "params": { 01:05:24.307 "enable": false, 01:05:24.307 "period_us": 100000 01:05:24.307 } 01:05:24.307 }, 01:05:24.307 { 01:05:24.307 "method": "bdev_wait_for_examine" 01:05:24.307 } 01:05:24.307 ] 01:05:24.307 }, 01:05:24.307 { 01:05:24.307 "subsystem": "nbd", 01:05:24.307 "config": [] 01:05:24.307 } 01:05:24.307 ] 01:05:24.307 }' 01:05:24.307 10:16:31 keyring_file -- keyring/file.sh@115 -- # killprocess 112363 01:05:24.307 10:16:31 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 112363 ']' 01:05:24.307 10:16:31 keyring_file -- common/autotest_common.sh@958 -- # kill -0 112363 01:05:24.307 10:16:31 keyring_file -- common/autotest_common.sh@959 -- # uname 01:05:24.307 10:16:31 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:05:24.307 10:16:31 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112363 01:05:24.307 killing process with pid 112363 01:05:24.307 Received shutdown signal, test time was about 1.000000 seconds 01:05:24.307 01:05:24.307 Latency(us) 01:05:24.307 [2024-12-09T10:16:31.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:05:24.307 [2024-12-09T10:16:31.351Z] 
=================================================================================================================== 01:05:24.307 [2024-12-09T10:16:31.351Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:05:24.307 10:16:31 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:05:24.307 10:16:31 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:05:24.307 10:16:31 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112363' 01:05:24.307 10:16:31 keyring_file -- common/autotest_common.sh@973 -- # kill 112363 01:05:24.307 10:16:31 keyring_file -- common/autotest_common.sh@978 -- # wait 112363 01:05:24.568 10:16:31 keyring_file -- keyring/file.sh@118 -- # bperfpid=112830 01:05:24.568 10:16:31 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 01:05:24.568 10:16:31 keyring_file -- keyring/file.sh@120 -- # waitforlisten 112830 /var/tmp/bperf.sock 01:05:24.568 10:16:31 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 112830 ']' 01:05:24.568 10:16:31 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:05:24.568 10:16:31 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 01:05:24.568 10:16:31 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:05:24.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:05:24.568 10:16:31 keyring_file -- keyring/file.sh@116 -- # echo '{ 01:05:24.568 "subsystems": [ 01:05:24.568 { 01:05:24.568 "subsystem": "keyring", 01:05:24.568 "config": [ 01:05:24.568 { 01:05:24.568 "method": "keyring_file_add_key", 01:05:24.568 "params": { 01:05:24.568 "name": "key0", 01:05:24.568 "path": "/tmp/tmp.1UnYhmbnR4" 01:05:24.568 } 01:05:24.568 }, 01:05:24.568 { 01:05:24.568 "method": "keyring_file_add_key", 01:05:24.568 "params": { 01:05:24.568 "name": "key1", 01:05:24.568 "path": "/tmp/tmp.2hDlKjhVDI" 01:05:24.568 } 01:05:24.568 } 01:05:24.568 ] 01:05:24.568 }, 01:05:24.568 { 01:05:24.568 "subsystem": "iobuf", 01:05:24.568 "config": [ 01:05:24.568 { 01:05:24.568 "method": "iobuf_set_options", 01:05:24.568 "params": { 01:05:24.568 "enable_numa": false, 01:05:24.568 "large_bufsize": 135168, 01:05:24.568 "large_pool_count": 1024, 01:05:24.568 "small_bufsize": 8192, 01:05:24.568 "small_pool_count": 8192 01:05:24.568 } 01:05:24.568 } 01:05:24.568 ] 01:05:24.568 }, 01:05:24.568 { 01:05:24.568 "subsystem": "sock", 01:05:24.568 "config": [ 01:05:24.568 { 01:05:24.568 "method": "sock_set_default_impl", 01:05:24.568 "params": { 01:05:24.568 "impl_name": "posix" 01:05:24.568 } 01:05:24.568 }, 01:05:24.568 { 01:05:24.568 "method": "sock_impl_set_options", 01:05:24.568 "params": { 01:05:24.568 "enable_ktls": false, 01:05:24.568 "enable_placement_id": 0, 01:05:24.568 "enable_quickack": false, 01:05:24.568 "enable_recv_pipe": true, 01:05:24.568 "enable_zerocopy_send_client": false, 01:05:24.568 "enable_zerocopy_send_server": true, 01:05:24.568 "impl_name": "ssl", 01:05:24.568 "recv_buf_size": 4096, 01:05:24.568 "send_buf_size": 4096, 01:05:24.568 "tls_version": 0, 01:05:24.568 "zerocopy_threshold": 0 01:05:24.568 } 01:05:24.568 }, 01:05:24.568 { 01:05:24.568 "method": "sock_impl_set_options", 01:05:24.568 "params": { 01:05:24.568 "enable_ktls": false, 01:05:24.568 "enable_placement_id": 0, 
01:05:24.568 "enable_quickack": false, 01:05:24.568 "enable_recv_pipe": true, 01:05:24.568 "enable_zerocopy_send_client": false, 01:05:24.568 "enable_zerocopy_send_server": true, 01:05:24.568 "impl_name": "posix", 01:05:24.568 "recv_buf_size": 2097152, 01:05:24.568 "send_buf_size": 2097152, 01:05:24.568 "tls_version": 0, 01:05:24.568 "zerocopy_threshold": 0 01:05:24.568 } 01:05:24.568 } 01:05:24.568 ] 01:05:24.568 }, 01:05:24.568 { 01:05:24.568 "subsystem": "vmd", 01:05:24.568 "config": [] 01:05:24.568 }, 01:05:24.568 { 01:05:24.568 "subsystem": "accel", 01:05:24.568 "config": [ 01:05:24.568 { 01:05:24.568 "method": "accel_set_options", 01:05:24.568 "params": { 01:05:24.568 "buf_count": 2048, 01:05:24.568 "large_cache_size": 16, 01:05:24.568 "sequence_count": 2048, 01:05:24.568 "small_cache_size": 128, 01:05:24.568 "task_count": 2048 01:05:24.568 } 01:05:24.568 } 01:05:24.568 ] 01:05:24.568 }, 01:05:24.568 { 01:05:24.568 "subsystem": "bdev", 01:05:24.568 "config": [ 01:05:24.568 { 01:05:24.568 "method": "bdev_set_options", 01:05:24.568 "params": { 01:05:24.568 "bdev_auto_examine": true, 01:05:24.568 "bdev_io_cache_size": 256, 01:05:24.568 "bdev_io_pool_size": 65535, 01:05:24.568 "iobuf_large_cache_size": 16, 01:05:24.568 "iobuf_small_cache_size": 128 01:05:24.568 } 01:05:24.568 }, 01:05:24.568 { 01:05:24.568 "method": "bdev_raid_set_options", 01:05:24.568 "params": { 01:05:24.568 "process_max_bandwidth_mb_sec": 0, 01:05:24.568 "process_window_size_kb": 1024 01:05:24.568 } 01:05:24.568 }, 01:05:24.568 { 01:05:24.568 "method": "bdev_iscsi_set_options", 01:05:24.568 "params": { 01:05:24.568 "timeout_sec": 30 01:05:24.568 } 01:05:24.568 }, 01:05:24.568 { 01:05:24.568 "method": "bdev_nvme_set_options", 01:05:24.568 "params": { 01:05:24.568 "action_on_timeout": "none", 01:05:24.568 "allow_accel_sequence": false, 01:05:24.568 "arbitration_burst": 0, 01:05:24.568 "bdev_retry_count": 3, 01:05:24.568 "ctrlr_loss_timeout_sec": 0, 01:05:24.568 "delay_cmd_submit": true, 01:05:24.568 "dhchap_dhgroups": [ 01:05:24.568 "null", 01:05:24.568 "ffdhe2048", 01:05:24.568 "ffdhe3072", 01:05:24.568 "ffdhe4096", 01:05:24.568 "ffdhe6144", 01:05:24.568 "ffdhe8192" 01:05:24.568 ], 01:05:24.568 "dhchap_digests": [ 01:05:24.568 "sha256", 01:05:24.568 "sha384", 01:05:24.568 "sha512" 01:05:24.568 ], 01:05:24.568 "disable_auto_failback": false, 01:05:24.568 "fast_io_fail_timeout_sec": 0, 01:05:24.568 "generate_uuids": false, 01:05:24.568 "high_priority_weight": 0, 01:05:24.568 "io_path_stat": false, 01:05:24.568 "io_queue_requests": 512, 01:05:24.568 "keep_alive_timeout_ms": 10000, 01:05:24.568 "low_priority_weight": 0, 01:05:24.568 10:16:31 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 01:05:24.568 "medium_priority_weight": 0, 01:05:24.568 "nvme_adminq_poll_period_us": 10000, 01:05:24.568 "nvme_error_stat": false, 01:05:24.568 "nvme_ioq_poll_period_us": 0, 01:05:24.568 "rdma_cm_event_timeout_ms": 0, 01:05:24.568 "rdma_max_cq_size": 0, 01:05:24.568 "rdma_srq_size": 0, 01:05:24.568 "reconnect_delay_sec": 0, 01:05:24.568 "timeout_admin_us": 0, 01:05:24.568 "timeout_us": 0, 01:05:24.568 "transport_ack_timeout": 0, 01:05:24.568 "transport_retry_count": 4, 01:05:24.568 "transport_tos": 0 01:05:24.568 } 01:05:24.568 }, 01:05:24.568 { 01:05:24.568 "method": "bdev_nvme_attach_controller", 01:05:24.568 "params": { 01:05:24.568 "adrfam": "IPv4", 01:05:24.568 "ctrlr_loss_timeout_sec": 0, 01:05:24.568 "ddgst": false, 01:05:24.568 "fast_io_fail_timeout_sec": 0, 01:05:24.568 "hdgst": false, 01:05:24.568 "hostnqn":
"nqn.2016-06.io.spdk:host0", 01:05:24.568 "multipath": "multipath", 01:05:24.568 "name": "nvme0", 01:05:24.568 "prchk_guard": false, 01:05:24.568 "prchk_reftag": false, 01:05:24.568 "psk": "key0", 01:05:24.568 "reconnect_delay_sec": 0, 01:05:24.568 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:05:24.568 "traddr": "127.0.0.1", 01:05:24.568 "trsvcid": "4420", 01:05:24.568 "trtype": "TCP" 01:05:24.568 } 01:05:24.568 }, 01:05:24.568 { 01:05:24.568 "method": "bdev_nvme_set_hotplug", 01:05:24.568 "params": { 01:05:24.568 "enable": false, 01:05:24.568 "period_us": 100000 01:05:24.568 } 01:05:24.568 }, 01:05:24.568 { 01:05:24.568 "method": "bdev_wait_for_examine" 01:05:24.568 } 01:05:24.568 ] 01:05:24.568 }, 01:05:24.568 { 01:05:24.568 "subsystem": "nbd", 01:05:24.568 "config": [] 01:05:24.568 } 01:05:24.568 ] 01:05:24.568 }' 01:05:24.568 10:16:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:05:24.568 [2024-12-09 10:16:31.520452] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 01:05:24.568 [2024-12-09 10:16:31.521208] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112830 ] 01:05:24.829 [2024-12-09 10:16:31.663772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:24.829 [2024-12-09 10:16:31.737100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:05:25.089 [2024-12-09 10:16:31.955216] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:05:25.658 10:16:32 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:05:25.658 10:16:32 keyring_file -- common/autotest_common.sh@868 -- # return 0 01:05:25.658 10:16:32 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 01:05:25.658 10:16:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:05:25.658 10:16:32 keyring_file -- keyring/file.sh@121 -- # jq length 01:05:25.658 10:16:32 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 01:05:25.658 10:16:32 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 01:05:25.658 10:16:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:05:25.658 10:16:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:05:25.658 10:16:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:05:25.658 10:16:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:05:25.658 10:16:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:05:25.918 10:16:32 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 01:05:25.918 10:16:32 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 01:05:25.918 10:16:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:05:25.918 10:16:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:05:25.918 10:16:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:05:25.918 10:16:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:05:25.918 10:16:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:05:26.178 10:16:33 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 01:05:26.178 
10:16:33 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 01:05:26.178 10:16:33 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 01:05:26.178 10:16:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 01:05:26.438 10:16:33 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 01:05:26.438 10:16:33 keyring_file -- keyring/file.sh@1 -- # cleanup 01:05:26.438 10:16:33 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.1UnYhmbnR4 /tmp/tmp.2hDlKjhVDI 01:05:26.438 10:16:33 keyring_file -- keyring/file.sh@20 -- # killprocess 112830 01:05:26.438 10:16:33 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 112830 ']' 01:05:26.438 10:16:33 keyring_file -- common/autotest_common.sh@958 -- # kill -0 112830 01:05:26.438 10:16:33 keyring_file -- common/autotest_common.sh@959 -- # uname 01:05:26.438 10:16:33 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:05:26.438 10:16:33 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112830 01:05:26.697 killing process with pid 112830 01:05:26.697 Received shutdown signal, test time was about 1.000000 seconds 01:05:26.697 01:05:26.697 Latency(us) 01:05:26.697 [2024-12-09T10:16:33.741Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:05:26.697 [2024-12-09T10:16:33.741Z] =================================================================================================================== 01:05:26.697 [2024-12-09T10:16:33.741Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:05:26.697 10:16:33 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:05:26.697 10:16:33 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:05:26.697 10:16:33 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112830' 01:05:26.697 10:16:33 keyring_file -- common/autotest_common.sh@973 -- # kill 112830 01:05:26.697 10:16:33 keyring_file -- common/autotest_common.sh@978 -- # wait 112830 01:05:26.697 10:16:33 keyring_file -- keyring/file.sh@21 -- # killprocess 112327 01:05:26.697 10:16:33 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 112327 ']' 01:05:26.697 10:16:33 keyring_file -- common/autotest_common.sh@958 -- # kill -0 112327 01:05:26.697 10:16:33 keyring_file -- common/autotest_common.sh@959 -- # uname 01:05:26.697 10:16:33 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:05:26.697 10:16:33 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112327 01:05:26.697 killing process with pid 112327 01:05:26.697 10:16:33 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:05:26.697 10:16:33 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:05:26.697 10:16:33 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112327' 01:05:26.697 10:16:33 keyring_file -- common/autotest_common.sh@973 -- # kill 112327 01:05:26.697 10:16:33 keyring_file -- common/autotest_common.sh@978 -- # wait 112327 01:05:27.266 01:05:27.266 real 0m14.895s 01:05:27.266 user 0m36.340s 01:05:27.266 sys 0m3.334s 01:05:27.266 10:16:34 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 01:05:27.266 10:16:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:05:27.266 ************************************ 01:05:27.266 END TEST keyring_file 01:05:27.266 
************************************ 01:05:27.266 10:16:34 -- spdk/autotest.sh@293 -- # [[ y == y ]] 01:05:27.266 10:16:34 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 01:05:27.266 10:16:34 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:05:27.266 10:16:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:05:27.266 10:16:34 -- common/autotest_common.sh@10 -- # set +x 01:05:27.266 ************************************ 01:05:27.266 START TEST keyring_linux 01:05:27.266 ************************************ 01:05:27.266 10:16:34 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 01:05:27.266 Joined session keyring: 115928486 01:05:27.266 * Looking for test storage... 01:05:27.266 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 01:05:27.266 10:16:34 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:05:27.266 10:16:34 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 01:05:27.266 10:16:34 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:05:27.529 10:16:34 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:05:27.529 10:16:34 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:05:27.529 10:16:34 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 01:05:27.529 10:16:34 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 01:05:27.529 10:16:34 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 01:05:27.529 10:16:34 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 01:05:27.529 10:16:34 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 01:05:27.529 10:16:34 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 01:05:27.529 10:16:34 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 01:05:27.529 10:16:34 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 01:05:27.529 10:16:34 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 01:05:27.529 10:16:34 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:05:27.529 10:16:34 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 01:05:27.529 10:16:34 keyring_linux -- scripts/common.sh@345 -- # : 1 01:05:27.529 10:16:34 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 01:05:27.529 10:16:34 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:05:27.529 10:16:34 keyring_linux -- scripts/common.sh@365 -- # decimal 1 01:05:27.529 10:16:34 keyring_linux -- scripts/common.sh@353 -- # local d=1 01:05:27.529 10:16:34 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:05:27.529 10:16:34 keyring_linux -- scripts/common.sh@355 -- # echo 1 01:05:27.529 10:16:34 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 01:05:27.529 10:16:34 keyring_linux -- scripts/common.sh@366 -- # decimal 2 01:05:27.529 10:16:34 keyring_linux -- scripts/common.sh@353 -- # local d=2 01:05:27.529 10:16:34 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:05:27.529 10:16:34 keyring_linux -- scripts/common.sh@355 -- # echo 2 01:05:27.529 10:16:34 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 01:05:27.529 10:16:34 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:05:27.529 10:16:34 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:05:27.529 10:16:34 keyring_linux -- scripts/common.sh@368 -- # return 0 01:05:27.529 10:16:34 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:05:27.529 10:16:34 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:05:27.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:05:27.529 --rc genhtml_branch_coverage=1 01:05:27.529 --rc genhtml_function_coverage=1 01:05:27.529 --rc genhtml_legend=1 01:05:27.529 --rc geninfo_all_blocks=1 01:05:27.529 --rc geninfo_unexecuted_blocks=1 01:05:27.529 01:05:27.529 ' 01:05:27.529 10:16:34 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:05:27.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:05:27.529 --rc genhtml_branch_coverage=1 01:05:27.529 --rc genhtml_function_coverage=1 01:05:27.529 --rc genhtml_legend=1 01:05:27.529 --rc geninfo_all_blocks=1 01:05:27.529 --rc geninfo_unexecuted_blocks=1 01:05:27.529 01:05:27.529 ' 01:05:27.529 10:16:34 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:05:27.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:05:27.529 --rc genhtml_branch_coverage=1 01:05:27.529 --rc genhtml_function_coverage=1 01:05:27.529 --rc genhtml_legend=1 01:05:27.529 --rc geninfo_all_blocks=1 01:05:27.529 --rc geninfo_unexecuted_blocks=1 01:05:27.529 01:05:27.529 ' 01:05:27.529 10:16:34 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:05:27.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:05:27.530 --rc genhtml_branch_coverage=1 01:05:27.530 --rc genhtml_function_coverage=1 01:05:27.530 --rc genhtml_legend=1 01:05:27.530 --rc geninfo_all_blocks=1 01:05:27.530 --rc geninfo_unexecuted_blocks=1 01:05:27.530 01:05:27.530 ' 01:05:27.530 10:16:34 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 01:05:27.530 10:16:34 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@7 -- # uname -s 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:05:27.530 10:16:34 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4e796c80-f069-427b-b4d9-5ff46fca0541 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=4e796c80-f069-427b-b4d9-5ff46fca0541 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:05:27.530 10:16:34 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 01:05:27.530 10:16:34 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:05:27.530 10:16:34 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:05:27.530 10:16:34 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:05:27.530 10:16:34 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:27.530 10:16:34 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:27.530 10:16:34 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:27.530 10:16:34 keyring_linux -- paths/export.sh@5 -- # export PATH 01:05:27.530 10:16:34 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@51 -- # : 0 
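Sourcing nvmf/common.sh above also mints a fresh host identity for the run: nvme gen-hostnqn returns a uuid-based NQN, and the uuid doubles as the host ID. A sketch of that setup (the suffix extraction shown is illustrative; the helper may derive the ID differently):

NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:4e796c80-...
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # assumed: host ID is the uuid portion of the NQN
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")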
01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:05:27.530 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 01:05:27.530 10:16:34 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 01:05:27.530 10:16:34 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 01:05:27.530 10:16:34 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 01:05:27.530 10:16:34 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 01:05:27.530 10:16:34 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 01:05:27.530 10:16:34 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 01:05:27.530 10:16:34 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 01:05:27.530 10:16:34 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 01:05:27.530 10:16:34 keyring_linux -- keyring/common.sh@17 -- # name=key0 01:05:27.530 10:16:34 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 01:05:27.530 10:16:34 keyring_linux -- keyring/common.sh@17 -- # digest=0 01:05:27.530 10:16:34 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 01:05:27.530 10:16:34 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@732 -- # digest=0 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@733 -- # python - 01:05:27.530 10:16:34 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 01:05:27.530 /tmp/:spdk-test:key0 01:05:27.530 10:16:34 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 01:05:27.530 10:16:34 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 01:05:27.530 10:16:34 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 01:05:27.530 10:16:34 keyring_linux -- keyring/common.sh@17 -- # name=key1 01:05:27.530 10:16:34 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 01:05:27.530 10:16:34 keyring_linux -- keyring/common.sh@17 -- # digest=0 01:05:27.530 10:16:34 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 01:05:27.530 10:16:34 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@732 -- # digest=0 01:05:27.530 10:16:34 keyring_linux -- nvmf/common.sh@733 -- # python - 01:05:27.530 10:16:34 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 01:05:27.530 /tmp/:spdk-test:key1 01:05:27.530 10:16:34 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 01:05:27.530 10:16:34 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=112988 01:05:27.530 10:16:34 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:05:27.530 10:16:34 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 112988 01:05:27.530 10:16:34 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 112988 ']' 01:05:27.530 10:16:34 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:05:27.530 10:16:34 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 01:05:27.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:05:27.530 10:16:34 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:05:27.530 10:16:34 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 01:05:27.530 10:16:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:05:27.530 [2024-12-09 10:16:34.514388] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
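linux.sh@50-53 above start the standalone target in the background, record its pid, and block until the RPC server answers. A sketch of that launch pattern (waitforlisten is the autotest_common.sh helper named in the trace; the polling behaviour described in the comment is assumed):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
tgtpid=$!
waitforlisten "$tgtpid"   # retries rpc.py against /var/tmp/spdk.sock until the target responds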
01:05:27.530 [2024-12-09 10:16:34.514460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112988 ] 01:05:27.796 [2024-12-09 10:16:34.660554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:27.796 [2024-12-09 10:16:34.716876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:05:28.734 10:16:35 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:05:28.734 10:16:35 keyring_linux -- common/autotest_common.sh@868 -- # return 0 01:05:28.734 10:16:35 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 01:05:28.734 10:16:35 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:28.734 10:16:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:05:28.734 [2024-12-09 10:16:35.453733] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:05:28.734 null0 01:05:28.734 [2024-12-09 10:16:35.485650] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:05:28.734 [2024-12-09 10:16:35.485816] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 01:05:28.734 10:16:35 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:28.734 10:16:35 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 01:05:28.734 873908564 01:05:28.734 10:16:35 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 01:05:28.734 765189513 01:05:28.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:05:28.734 10:16:35 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=113024 01:05:28.734 10:16:35 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 113024 /var/tmp/bperf.sock 01:05:28.734 10:16:35 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 113024 ']' 01:05:28.734 10:16:35 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:05:28.734 10:16:35 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 01:05:28.734 10:16:35 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 01:05:28.734 10:16:35 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:05:28.734 10:16:35 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 01:05:28.734 10:16:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:05:28.734 [2024-12-09 10:16:35.566918] Starting SPDK v25.01-pre git sha1 b71c8b8dd / DPDK 24.03.0 initialization... 
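Because this suite runs under scripts/keyctl-session-wrapper (hence the "Joined session keyring: 115928486" line near the top of the test), the PSKs live in the kernel session keyring instead of plain files; linux.sh@66-67 above load both interchange strings and capture the serial numbers keyctl returns (873908564 and 765189513). A sketch of the round trip the later checks exercise:

sn=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)
keyctl search @s user :spdk-test:key0   # resolves the name back to $sn (linux.sh@16)
keyctl print "$sn"                      # dumps the payload for the comparison at linux.sh@27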
01:05:28.734 [2024-12-09 10:16:35.566975] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113024 ] 01:05:28.734 [2024-12-09 10:16:35.714214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:28.734 [2024-12-09 10:16:35.775096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:05:29.672 10:16:36 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:05:29.672 10:16:36 keyring_linux -- common/autotest_common.sh@868 -- # return 0 01:05:29.672 10:16:36 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 01:05:29.672 10:16:36 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 01:05:29.672 10:16:36 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 01:05:29.672 10:16:36 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:05:30.242 10:16:36 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 01:05:30.242 10:16:36 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 01:05:30.242 [2024-12-09 10:16:37.187340] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:05:30.242 nvme0n1 01:05:30.501 10:16:37 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 01:05:30.501 10:16:37 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 01:05:30.501 10:16:37 keyring_linux -- keyring/linux.sh@20 -- # local sn 01:05:30.501 10:16:37 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 01:05:30.501 10:16:37 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:05:30.501 10:16:37 keyring_linux -- keyring/linux.sh@22 -- # jq length 01:05:30.501 10:16:37 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 01:05:30.501 10:16:37 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 01:05:30.501 10:16:37 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 01:05:30.760 10:16:37 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 01:05:30.760 10:16:37 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 01:05:30.760 10:16:37 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:05:30.760 10:16:37 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:05:30.760 10:16:37 keyring_linux -- keyring/linux.sh@25 -- # sn=873908564 01:05:30.760 10:16:37 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 01:05:30.760 10:16:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 01:05:31.019 10:16:37 keyring_linux -- keyring/linux.sh@26 -- # [[ 873908564 == \8\7\3\9\0\8\5\6\4 ]] 01:05:31.019 10:16:37 keyring_linux -- 
keyring/linux.sh@27 -- # keyctl print 873908564 01:05:31.019 10:16:37 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 01:05:31.019 10:16:37 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:05:31.019 Running I/O for 1 seconds... 01:05:31.958 18756.00 IOPS, 73.27 MiB/s 01:05:31.958 Latency(us) 01:05:31.958 [2024-12-09T10:16:39.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:05:31.958 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 01:05:31.958 nvme0n1 : 1.01 18742.53 73.21 0.00 0.00 6801.29 2446.87 8928.92 01:05:31.958 [2024-12-09T10:16:39.002Z] =================================================================================================================== 01:05:31.958 [2024-12-09T10:16:39.002Z] Total : 18742.53 73.21 0.00 0.00 6801.29 2446.87 8928.92 01:05:31.958 { 01:05:31.958 "results": [ 01:05:31.958 { 01:05:31.958 "job": "nvme0n1", 01:05:31.958 "core_mask": "0x2", 01:05:31.958 "workload": "randread", 01:05:31.958 "status": "finished", 01:05:31.958 "queue_depth": 128, 01:05:31.958 "io_size": 4096, 01:05:31.958 "runtime": 1.007548, 01:05:31.958 "iops": 18742.531373195125, 01:05:31.958 "mibps": 73.21301317654346, 01:05:31.958 "io_failed": 0, 01:05:31.958 "io_timeout": 0, 01:05:31.958 "avg_latency_us": 6801.2860246284135, 01:05:31.958 "min_latency_us": 2446.8681222707423, 01:05:31.958 "max_latency_us": 8928.922270742358 01:05:31.958 } 01:05:31.958 ], 01:05:31.958 "core_count": 1 01:05:31.958 } 01:05:31.958 10:16:38 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 01:05:31.958 10:16:38 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 01:05:32.217 10:16:39 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 01:05:32.217 10:16:39 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 01:05:32.217 10:16:39 keyring_linux -- keyring/linux.sh@20 -- # local sn 01:05:32.217 10:16:39 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 01:05:32.217 10:16:39 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:05:32.217 10:16:39 keyring_linux -- keyring/linux.sh@22 -- # jq length 01:05:32.477 10:16:39 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 01:05:32.477 10:16:39 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 01:05:32.477 10:16:39 keyring_linux -- keyring/linux.sh@23 -- # return 01:05:32.477 10:16:39 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:05:32.477 10:16:39 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 01:05:32.477 10:16:39 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:05:32.477 10:16:39 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 01:05:32.477 10:16:39 keyring_linux -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:05:32.477 10:16:39 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 01:05:32.477 10:16:39 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:05:32.477 10:16:39 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:05:32.477 10:16:39 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:05:32.737 [2024-12-09 10:16:39.649669] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:05:32.737 [2024-12-09 10:16:39.650189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc71f0 (107): Transport endpoint is not connected 01:05:32.737 [2024-12-09 10:16:39.651177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc71f0 (9): Bad file descriptor 01:05:32.737 [2024-12-09 10:16:39.652174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 01:05:32.737 [2024-12-09 10:16:39.652193] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 01:05:32.737 [2024-12-09 10:16:39.652201] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 01:05:32.737 [2024-12-09 10:16:39.652209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
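The failed attach above is the mirror image of the keyring_file negative test: there the backing file had been deleted, so the attach aborted at stat() time with Code=-19 (No such device) before any connection was made; here the listener only trusts :spdk-test:key0, so the connection attempted with :spdk-test:key1 is torn down during setup and surfaces as Code=-5 (Input/output error) in the response below. Both cases are asserted with the NOT wrapper from autotest_common.sh, which inverts the exit status. A condensed sketch of the assertion:

# passes only when the attach with the untrusted key fails
NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
  -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
  -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1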
01:05:32.737 2024/12/09 10:16:39 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:05:32.737 request: 01:05:32.737 { 01:05:32.737 "method": "bdev_nvme_attach_controller", 01:05:32.737 "params": { 01:05:32.737 "name": "nvme0", 01:05:32.737 "trtype": "tcp", 01:05:32.737 "traddr": "127.0.0.1", 01:05:32.737 "adrfam": "ipv4", 01:05:32.737 "trsvcid": "4420", 01:05:32.737 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:05:32.737 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:05:32.737 "prchk_reftag": false, 01:05:32.737 "prchk_guard": false, 01:05:32.737 "hdgst": false, 01:05:32.737 "ddgst": false, 01:05:32.737 "psk": ":spdk-test:key1", 01:05:32.737 "allow_unrecognized_csi": false 01:05:32.737 } 01:05:32.737 } 01:05:32.737 Got JSON-RPC error response 01:05:32.737 GoRPCClient: error on JSON-RPC call 01:05:32.737 10:16:39 keyring_linux -- common/autotest_common.sh@655 -- # es=1 01:05:32.737 10:16:39 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:05:32.737 10:16:39 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:05:32.737 10:16:39 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:05:32.737 10:16:39 keyring_linux -- keyring/linux.sh@1 -- # cleanup 01:05:32.737 10:16:39 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 01:05:32.737 10:16:39 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 01:05:32.737 10:16:39 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 01:05:32.737 10:16:39 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 01:05:32.737 10:16:39 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 01:05:32.737 10:16:39 keyring_linux -- keyring/linux.sh@33 -- # sn=873908564 01:05:32.737 10:16:39 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 873908564 01:05:32.737 1 links removed 01:05:32.737 10:16:39 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 01:05:32.737 10:16:39 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 01:05:32.737 10:16:39 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 01:05:32.737 10:16:39 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 01:05:32.737 10:16:39 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 01:05:32.737 10:16:39 keyring_linux -- keyring/linux.sh@33 -- # sn=765189513 01:05:32.737 10:16:39 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 765189513 01:05:32.737 1 links removed 01:05:32.737 10:16:39 keyring_linux -- keyring/linux.sh@41 -- # killprocess 113024 01:05:32.737 10:16:39 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 113024 ']' 01:05:32.737 10:16:39 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 113024 01:05:32.737 10:16:39 keyring_linux -- common/autotest_common.sh@959 -- # uname 01:05:32.737 10:16:39 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:05:32.737 10:16:39 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113024 01:05:32.737 10:16:39 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:05:32.737 
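Cleanup then walks both test keys, resolves each name to its serial via keyctl search, and unlinks it, which is why the log prints "1 links removed" twice. A condensed sketch of linux.sh@31-34 (the name-prefixing wrapper shape is assumed):

unlink_key() {
  local sn
  sn=$(keyctl search @s user ":spdk-test:$1")   # key name -> serial number
  keyctl unlink "$sn"                           # drops the link from the session keyring
}
unlink_key key0
unlink_key key1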
01:05:32.737 10:16:39 keyring_linux -- keyring/linux.sh@41 -- # killprocess 113024
01:05:32.737 10:16:39 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 113024 ']'
01:05:32.737 10:16:39 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 113024
01:05:32.737 10:16:39 keyring_linux -- common/autotest_common.sh@959 -- # uname
01:05:32.737 10:16:39 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:05:32.737 10:16:39 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113024
01:05:32.737 10:16:39 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1
01:05:32.737 10:16:39 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
01:05:32.737 killing process with pid 113024
10:16:39 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 113024'
10:16:39 keyring_linux -- common/autotest_common.sh@973 -- # kill 113024
Received shutdown signal, test time was about 1.000000 seconds
01:05:32.737 
01:05:32.737 Latency(us)
01:05:32.737 [2024-12-09T10:16:39.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:05:32.737 [2024-12-09T10:16:39.781Z] ===================================================================================================================
01:05:32.737 [2024-12-09T10:16:39.781Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:05:32.737 10:16:39 keyring_linux -- common/autotest_common.sh@978 -- # wait 113024
01:05:32.997 10:16:39 keyring_linux -- keyring/linux.sh@42 -- # killprocess 112988
01:05:32.997 10:16:39 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 112988 ']'
01:05:32.997 10:16:39 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 112988
01:05:32.997 10:16:39 keyring_linux -- common/autotest_common.sh@959 -- # uname
01:05:32.997 10:16:39 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:05:32.997 10:16:39 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112988
01:05:32.997 10:16:39 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0
01:05:32.997 10:16:39 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
01:05:32.997 killing process with pid 112988
10:16:39 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112988'
10:16:39 keyring_linux -- common/autotest_common.sh@973 -- # kill 112988
01:05:32.997 10:16:39 keyring_linux -- common/autotest_common.sh@978 -- # wait 112988
01:05:33.257 
01:05:33.257 real 0m6.191s
01:05:33.257 user 0m11.736s
01:05:33.257 sys 0m1.621s
01:05:33.257 10:16:40 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable
01:05:33.257 10:16:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x
01:05:33.257 ************************************
01:05:33.257 END TEST keyring_linux
01:05:33.257 ************************************
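killprocess, traced twice above (PIDs 113024 and 112988), follows one fixed shape: validate the PID, refuse to signal a sudo wrapper, send SIGTERM, then wait so the exit status is reaped. A sketch consistent with that xtrace, not the verbatim helper from autotest_common.sh:

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1                   # no PID given
        kill -0 "$pid" 2>/dev/null || return 0      # already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1      # never kill the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                 # reap; works because the PID is a child of this shell
    }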
01:05:33.518 10:16:40 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
01:05:33.518 10:16:40 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
01:05:33.518 10:16:40 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
01:05:33.518 10:16:40 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
01:05:33.518 10:16:40 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
01:05:33.518 10:16:40 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
01:05:33.518 10:16:40 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
01:05:33.518 10:16:40 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
01:05:33.518 10:16:40 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
01:05:33.518 10:16:40 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
01:05:33.518 10:16:40 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
01:05:33.518 10:16:40 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
01:05:33.518 10:16:40 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
01:05:33.518 10:16:40 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
01:05:33.518 10:16:40 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
01:05:33.518 10:16:40 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
01:05:33.518 10:16:40 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
01:05:33.518 10:16:40 -- common/autotest_common.sh@726 -- # xtrace_disable
01:05:33.518 10:16:40 -- common/autotest_common.sh@10 -- # set +x
01:05:33.518 10:16:40 -- spdk/autotest.sh@388 -- # autotest_cleanup
01:05:33.518 10:16:40 -- common/autotest_common.sh@1396 -- # local autotest_es=0
01:05:33.518 10:16:40 -- common/autotest_common.sh@1397 -- # xtrace_disable
01:05:33.518 10:16:40 -- common/autotest_common.sh@10 -- # set +x
01:05:36.055 INFO: APP EXITING
01:05:36.055 INFO: killing all VMs
01:05:36.055 INFO: killing vhost app
01:05:36.055 INFO: EXIT DONE
01:05:36.623 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
01:05:36.623 0000:00:11.0 (1b36 0010): Already using the nvme driver
01:05:36.623 0000:00:10.0 (1b36 0010): Already using the nvme driver
01:05:37.561 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
01:05:37.561 Cleaning
01:05:37.561 Removing: /var/run/dpdk/spdk0/config
01:05:37.561 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
01:05:37.561 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
01:05:37.561 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
01:05:37.561 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
01:05:37.561 Removing: /var/run/dpdk/spdk0/fbarray_memzone
01:05:37.561 Removing: /var/run/dpdk/spdk0/hugepage_info
01:05:37.561 Removing: /var/run/dpdk/spdk1/config
01:05:37.561 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
01:05:37.561 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
01:05:37.561 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
01:05:37.561 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
01:05:37.561 Removing: /var/run/dpdk/spdk1/fbarray_memzone
01:05:37.561 Removing: /var/run/dpdk/spdk1/hugepage_info
01:05:37.561 Removing: /var/run/dpdk/spdk2/config
01:05:37.561 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
01:05:37.561 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
01:05:37.561 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
01:05:37.561 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
01:05:37.561 Removing: /var/run/dpdk/spdk2/fbarray_memzone
01:05:37.561 Removing: /var/run/dpdk/spdk2/hugepage_info
01:05:37.561 Removing: /var/run/dpdk/spdk3/config
01:05:37.821 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
01:05:37.821 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
01:05:37.821 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
01:05:37.821 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
01:05:37.821 Removing: /var/run/dpdk/spdk3/fbarray_memzone
01:05:37.821 Removing: /var/run/dpdk/spdk3/hugepage_info
01:05:37.821 Removing: /var/run/dpdk/spdk4/config
01:05:37.821 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
01:05:37.821 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
01:05:37.821 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
01:05:37.821 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
01:05:37.821 Removing: /var/run/dpdk/spdk4/fbarray_memzone
01:05:37.821 Removing: /var/run/dpdk/spdk4/hugepage_info
01:05:37.821 Removing: /dev/shm/nvmf_trace.0
01:05:37.821 Removing: /dev/shm/spdk_tgt_trace.pid58948
01:05:37.821 Removing: /var/run/dpdk/spdk0
01:05:37.821 Removing: /var/run/dpdk/spdk1
01:05:37.821 Removing: /var/run/dpdk/spdk2
01:05:37.821 Removing: /var/run/dpdk/spdk3
01:05:37.821 Removing: /var/run/dpdk/spdk4
01:05:37.821 Removing: /var/run/dpdk/spdk_pid102485
01:05:37.821 Removing: /var/run/dpdk/spdk_pid102532
01:05:37.821 Removing: /var/run/dpdk/spdk_pid102886
01:05:37.821 Removing: /var/run/dpdk/spdk_pid102931
01:05:37.821 Removing: /var/run/dpdk/spdk_pid103336
01:05:37.821 Removing: /var/run/dpdk/spdk_pid103907
01:05:37.821 Removing: /var/run/dpdk/spdk_pid104337
01:05:37.821 Removing: /var/run/dpdk/spdk_pid105394
01:05:37.821 Removing: /var/run/dpdk/spdk_pid106486
01:05:37.821 Removing: /var/run/dpdk/spdk_pid106614
01:05:37.821 Removing: /var/run/dpdk/spdk_pid106672
01:05:37.821 Removing: /var/run/dpdk/spdk_pid108299
01:05:37.821 Removing: /var/run/dpdk/spdk_pid108625
01:05:37.821 Removing: /var/run/dpdk/spdk_pid108976
01:05:37.821 Removing: /var/run/dpdk/spdk_pid109559
01:05:37.821 Removing: /var/run/dpdk/spdk_pid109564
01:05:37.821 Removing: /var/run/dpdk/spdk_pid109979
01:05:37.821 Removing: /var/run/dpdk/spdk_pid110139
01:05:37.821 Removing: /var/run/dpdk/spdk_pid110301
01:05:37.821 Removing: /var/run/dpdk/spdk_pid110398
01:05:37.821 Removing: /var/run/dpdk/spdk_pid110553
01:05:37.821 Removing: /var/run/dpdk/spdk_pid110672
01:05:37.821 Removing: /var/run/dpdk/spdk_pid111407
01:05:37.821 Removing: /var/run/dpdk/spdk_pid111442
01:05:37.821 Removing: /var/run/dpdk/spdk_pid111472
01:05:37.821 Removing: /var/run/dpdk/spdk_pid111759
01:05:37.821 Removing: /var/run/dpdk/spdk_pid111794
01:05:37.821 Removing: /var/run/dpdk/spdk_pid111824
01:05:37.821 Removing: /var/run/dpdk/spdk_pid112327
01:05:37.821 Removing: /var/run/dpdk/spdk_pid112363
01:05:37.821 Removing: /var/run/dpdk/spdk_pid112830
01:05:37.821 Removing: /var/run/dpdk/spdk_pid112988
01:05:37.821 Removing: /var/run/dpdk/spdk_pid113024
01:05:37.821 Removing: /var/run/dpdk/spdk_pid58795
01:05:37.821 Removing: /var/run/dpdk/spdk_pid58948
01:05:37.821 Removing: /var/run/dpdk/spdk_pid59217
01:05:37.821 Removing: /var/run/dpdk/spdk_pid59310
01:05:37.821 Removing: /var/run/dpdk/spdk_pid59343
01:05:37.821 Removing: /var/run/dpdk/spdk_pid59458
01:05:37.821 Removing: /var/run/dpdk/spdk_pid59488
01:05:37.821 Removing: /var/run/dpdk/spdk_pid59622
01:05:38.081 Removing: /var/run/dpdk/spdk_pid59897
01:05:38.081 Removing: /var/run/dpdk/spdk_pid60081
01:05:38.081 Removing: /var/run/dpdk/spdk_pid60172
01:05:38.081 Removing: /var/run/dpdk/spdk_pid60272
01:05:38.081 Removing: /var/run/dpdk/spdk_pid60375
01:05:38.081 Removing: /var/run/dpdk/spdk_pid60414
01:05:38.081 Removing: /var/run/dpdk/spdk_pid60449
01:05:38.081 Removing: /var/run/dpdk/spdk_pid60519
01:05:38.081 Removing: /var/run/dpdk/spdk_pid60632
01:05:38.081 Removing: /var/run/dpdk/spdk_pid61259
01:05:38.081 Removing: /var/run/dpdk/spdk_pid61323
01:05:38.081 Removing: /var/run/dpdk/spdk_pid61385
01:05:38.081 Removing: /var/run/dpdk/spdk_pid61409
01:05:38.081 Removing: /var/run/dpdk/spdk_pid61490
01:05:38.081 Removing: /var/run/dpdk/spdk_pid61519
01:05:38.081 Removing: /var/run/dpdk/spdk_pid61587
01:05:38.081 Removing: /var/run/dpdk/spdk_pid61615
01:05:38.081 Removing: /var/run/dpdk/spdk_pid61672
01:05:38.081 Removing: /var/run/dpdk/spdk_pid61702
01:05:38.081 Removing: /var/run/dpdk/spdk_pid61748
01:05:38.081 Removing: /var/run/dpdk/spdk_pid61778
01:05:38.081 Removing: /var/run/dpdk/spdk_pid61933
01:05:38.081 Removing: /var/run/dpdk/spdk_pid61968
01:05:38.081 Removing: /var/run/dpdk/spdk_pid62046
01:05:38.081 Removing: /var/run/dpdk/spdk_pid62551
01:05:38.081 Removing: /var/run/dpdk/spdk_pid62940
01:05:38.081 Removing: /var/run/dpdk/spdk_pid65478
01:05:38.081 Removing: /var/run/dpdk/spdk_pid65529
01:05:38.081 Removing: /var/run/dpdk/spdk_pid65902
01:05:38.081 Removing: /var/run/dpdk/spdk_pid65952
01:05:38.081 Removing: /var/run/dpdk/spdk_pid66362
01:05:38.081 Removing: /var/run/dpdk/spdk_pid66943
01:05:38.081 Removing: /var/run/dpdk/spdk_pid67393
01:05:38.081 Removing: /var/run/dpdk/spdk_pid68465
01:05:38.081 Removing: /var/run/dpdk/spdk_pid69544
01:05:38.081 Removing: /var/run/dpdk/spdk_pid69661
01:05:38.081 Removing: /var/run/dpdk/spdk_pid69729
01:05:38.081 Removing: /var/run/dpdk/spdk_pid71356
01:05:38.081 Removing: /var/run/dpdk/spdk_pid71705
01:05:38.081 Removing: /var/run/dpdk/spdk_pid75647
01:05:38.081 Removing: /var/run/dpdk/spdk_pid76087
01:05:38.081 Removing: /var/run/dpdk/spdk_pid76738
01:05:38.081 Removing: /var/run/dpdk/spdk_pid77282
01:05:38.081 Removing: /var/run/dpdk/spdk_pid82919
01:05:38.081 Removing: /var/run/dpdk/spdk_pid83417
01:05:38.081 Removing: /var/run/dpdk/spdk_pid83527
01:05:38.081 Removing: /var/run/dpdk/spdk_pid83680
01:05:38.081 Removing: /var/run/dpdk/spdk_pid83738
01:05:38.081 Removing: /var/run/dpdk/spdk_pid83785
01:05:38.081 Removing: /var/run/dpdk/spdk_pid83843
01:05:38.081 Removing: /var/run/dpdk/spdk_pid84015
01:05:38.081 Removing: /var/run/dpdk/spdk_pid84166
01:05:38.081 Removing: /var/run/dpdk/spdk_pid84462
01:05:38.081 Removing: /var/run/dpdk/spdk_pid84587
01:05:38.081 Removing: /var/run/dpdk/spdk_pid84845
01:05:38.081 Removing: /var/run/dpdk/spdk_pid84971
01:05:38.081 Removing: /var/run/dpdk/spdk_pid85101
01:05:38.081 Removing: /var/run/dpdk/spdk_pid85495
01:05:38.081 Removing: /var/run/dpdk/spdk_pid85958
01:05:38.340 Removing: /var/run/dpdk/spdk_pid85959
01:05:38.340 Removing: /var/run/dpdk/spdk_pid85960
01:05:38.340 Removing: /var/run/dpdk/spdk_pid86256
01:05:38.340 Removing: /var/run/dpdk/spdk_pid86541
01:05:38.340 Removing: /var/run/dpdk/spdk_pid86971
01:05:38.340 Removing: /var/run/dpdk/spdk_pid87340
01:05:38.340 Removing: /var/run/dpdk/spdk_pid87960
01:05:38.340 Removing: /var/run/dpdk/spdk_pid87963
01:05:38.340 Removing: /var/run/dpdk/spdk_pid88355
01:05:38.340 Removing: /var/run/dpdk/spdk_pid88380
01:05:38.340 Removing: /var/run/dpdk/spdk_pid88394
01:05:38.340 Removing: /var/run/dpdk/spdk_pid88426
01:05:38.340 Removing: /var/run/dpdk/spdk_pid88433
01:05:38.340 Removing: /var/run/dpdk/spdk_pid88848
01:05:38.340 Removing: /var/run/dpdk/spdk_pid88897
01:05:38.340 Removing: /var/run/dpdk/spdk_pid89288
01:05:38.340 Removing: /var/run/dpdk/spdk_pid89546
01:05:38.340 Removing: /var/run/dpdk/spdk_pid90085
01:05:38.340 Removing: /var/run/dpdk/spdk_pid90729
01:05:38.340 Removing: /var/run/dpdk/spdk_pid92127
01:05:38.340 Removing: /var/run/dpdk/spdk_pid92783
01:05:38.340 Removing: /var/run/dpdk/spdk_pid92789
01:05:38.340 Removing: /var/run/dpdk/spdk_pid94859
01:05:38.340 Removing: /var/run/dpdk/spdk_pid94945
01:05:38.340 Removing: /var/run/dpdk/spdk_pid95035
01:05:38.340 Removing: /var/run/dpdk/spdk_pid95121
01:05:38.340 Removing: /var/run/dpdk/spdk_pid95284
01:05:38.340 Removing: /var/run/dpdk/spdk_pid95369
01:05:38.340 Removing: /var/run/dpdk/spdk_pid95459
01:05:38.340 Removing: /var/run/dpdk/spdk_pid95538
01:05:38.340 Removing: /var/run/dpdk/spdk_pid95946
01:05:38.340 Removing: /var/run/dpdk/spdk_pid96732
01:05:38.340 Removing: /var/run/dpdk/spdk_pid98142
01:05:38.340 Removing: /var/run/dpdk/spdk_pid98347
01:05:38.340 Removing: /var/run/dpdk/spdk_pid98633
01:05:38.340 Removing: /var/run/dpdk/spdk_pid99185
01:05:38.340 Removing: /var/run/dpdk/spdk_pid99585
01:05:38.340 Clean
01:05:38.340 10:16:45 -- common/autotest_common.sh@1453 -- # return 0
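The Cleaning pass above removes per-instance DPDK runtime directories, stale per-PID lock files, and SPDK's shared-memory trace files. A rough shell equivalent, inferred only from the paths printed above rather than from the cleanup script itself:

    # Per-instance DPDK runtime state (config, fbarray_memseg-*, fbarray_memzone,
    # hugepage_info) lives under /var/run/dpdk/spdk<N>:
    rm -rf /var/run/dpdk/spdk0 /var/run/dpdk/spdk1 /var/run/dpdk/spdk2 \
           /var/run/dpdk/spdk3 /var/run/dpdk/spdk4
    # Stale per-PID entries accumulated across the whole autotest run:
    rm -rf /var/run/dpdk/spdk_pid*
    # SPDK trace buffers kept in shared memory:
    rm -f /dev/shm/nvmf_trace.0 /dev/shm/spdk_tgt_trace.pid*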
01:05:38.340 10:16:45 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
01:05:38.340 10:16:45 -- common/autotest_common.sh@732 -- # xtrace_disable
01:05:38.340 10:16:45 -- common/autotest_common.sh@10 -- # set +x
01:05:38.599 10:16:45 -- spdk/autotest.sh@391 -- # timing_exit autotest
01:05:38.599 10:16:45 -- common/autotest_common.sh@732 -- # xtrace_disable
01:05:38.599 10:16:45 -- common/autotest_common.sh@10 -- # set +x
01:05:38.599 10:16:45 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
01:05:38.599 10:16:45 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
01:05:38.599 10:16:45 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
01:05:38.599 10:16:45 -- spdk/autotest.sh@396 -- # [[ y == y ]]
01:05:38.599 10:16:45 -- spdk/autotest.sh@398 -- # hostname
01:05:38.599 10:16:45 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
01:05:38.859 geninfo: WARNING: invalid characters removed from testname!
01:06:05.415 10:17:10 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:06:06.790 10:17:13 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:06:09.341 10:17:16 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:06:11.883 10:17:18 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:06:14.413 10:17:21 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
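The lcov sequence above (together with the */app/spdk_top/* pass just below) is a capture-merge-filter pipeline: capture the test run, merge it with the pre-test baseline, then repeatedly strip code that is not SPDK's own. Condensed into a sketch with the repeated --rc switches factored out; paths and patterns are the ones from this run, and only the '/usr/*' pass in the real log adds --ignore-errors unused,unused:

    OUT=/home/vagrant/spdk_repo/spdk/../output
    RC='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    # Merge the pre-test baseline and the post-test capture into one tracefile:
    lcov $RC -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    # Remove third-party, system and helper-app code, one pattern at a time:
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $RC -q -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
    done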
01:06:16.316 10:17:23 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:06:18.894 10:17:25 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
01:06:18.894 10:17:25 -- spdk/autorun.sh@1 -- $ timing_finish
01:06:18.894 10:17:25 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
01:06:18.894 10:17:25 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
01:06:18.894 10:17:25 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
01:06:18.894 10:17:25 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
01:06:18.894 + [[ -n 5423 ]]
01:06:18.894 + sudo kill 5423
01:06:18.903 [Pipeline] }
01:06:18.919 [Pipeline] // timeout
01:06:18.924 [Pipeline] }
01:06:18.937 [Pipeline] // stage
01:06:18.942 [Pipeline] }
01:06:18.957 [Pipeline] // catchError
01:06:18.966 [Pipeline] stage
01:06:18.968 [Pipeline] { (Stop VM)
01:06:18.981 [Pipeline] sh
01:06:19.260 + vagrant halt
01:06:22.545 ==> default: Halting domain...
01:06:30.677 [Pipeline] sh
01:06:30.959 + vagrant destroy -f
01:06:34.266 ==> default: Removing domain...
01:06:34.277 [Pipeline] sh
01:06:34.558 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output
01:06:34.567 [Pipeline] }
01:06:34.584 [Pipeline] // stage
01:06:34.591 [Pipeline] }
01:06:34.604 [Pipeline] // dir
01:06:34.610 [Pipeline] }
01:06:34.626 [Pipeline] // wrap
01:06:34.633 [Pipeline] }
01:06:34.645 [Pipeline] // catchError
01:06:34.656 [Pipeline] stage
01:06:34.658 [Pipeline] { (Epilogue)
01:06:34.672 [Pipeline] sh
01:06:34.956 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
01:06:41.542 [Pipeline] catchError
01:06:41.545 [Pipeline] {
01:06:41.560 [Pipeline] sh
01:06:41.846 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
01:06:41.846 Artifacts sizes are good
01:06:41.855 [Pipeline] }
01:06:41.870 [Pipeline] // catchError
01:06:41.882 [Pipeline] archiveArtifacts
01:06:41.889 Archiving artifacts
01:06:42.027 [Pipeline] cleanWs
01:06:42.050 [WS-CLEANUP] Deleting project workspace...
01:06:42.050 [WS-CLEANUP] Deferred wipeout is used...
01:06:42.069 [WS-CLEANUP] done
01:06:42.071 [Pipeline] }
01:06:42.088 [Pipeline] // stage
01:06:42.094 [Pipeline] }
01:06:42.109 [Pipeline] // node
01:06:42.115 [Pipeline] End of Pipeline
01:06:42.154 Finished: SUCCESS